Design Exploration User's Guide

Release 15.0
ANSYS, Inc.
February 2014

Southpointe
275 Technology Drive
Canonsburg, PA 15317

ANSYS, Inc. is certified to ISO 9001:2008.
[email protected]
http://www.ansys.com
(T) 724-746-3304
(F) 724-514-9494


Because a good design point is often the result of a trade-off between various objectives, the exploration of a given design cannot be performed by using optimization algorithms that lead to a single design point. It is important to gather enough information about the current design so as to be able to answer the so-called "what-if" questions, quantifying the influence of design variables on the performance of the product in an exhaustive manner. By doing so, the right decisions can be made based on accurate information, even in the event of an unexpected change in the design constraints.
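As a conceptual sketch of this idea (not DesignXplorer's actual GUI-driven workflow), the following evaluates a hypothetical performance model over a small full-factorial design of experiments and then answers a "what-if" question by estimating the sensitivity of the output to each design variable near a candidate point. The function f and all variable names are illustrative assumptions, standing in for a real simulation.

```python
# Conceptual "what-if" design study; f() is a stand-in for a real simulation.
import itertools

def f(thickness, radius):
    # Hypothetical performance metric for illustration only.
    return thickness * radius**2 - 0.5 * thickness**2

# Design of experiments: a full-factorial grid over both design variables.
thicknesses = [1.0, 1.5, 2.0]
radii = [10.0, 12.0, 14.0]
doe = [(t, r, f(t, r)) for t, r in itertools.product(thicknesses, radii)]

# "What-if": how sensitive is performance to each variable near a candidate
# design point (t0, r0)?  Central differences approximate the local slopes.
t0, r0, dt, dr = 1.5, 12.0, 0.1, 0.1
sens_t = (f(t0 + dt, r0) - f(t0 - dt, r0)) / (2 * dt)
sens_r = (f(t0, r0 + dr) - f(t0, r0 - dr)) / (2 * dr)
print(f"sensitivity to thickness: {sens_t:.2f}, to radius: {sens_r:.2f}")
```

Comparing the two sensitivities shows which variable dominates the performance near the candidate, which is exactly the kind of information a single-point optimizer would not provide.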




Copyright and Trademark Information

© 2013 SAS IP, Inc. All rights reserved. Unauthorized use, distribution or duplication is prohibited.

ANSYS, ANSYS Workbench, Ansoft, AUTODYN, EKM, Engineering Knowledge Manager, CFX, FLUENT, HFSS, AIM and any and all ANSYS, Inc. brand, product, service and feature names, logos and slogans are registered trademarks or trademarks of ANSYS, Inc. or its subsidiaries in the United States or other countries. ICEM CFD is a trademark used by ANSYS, Inc. under license. CFX is a trademark of Sony Corporation in Japan. All other brand, product, service and feature names or trademarks are the property of their respective owners.

Disclaimer Notice

THIS ANSYS SOFTWARE PRODUCT AND PROGRAM DOCUMENTATION INCLUDE TRADE SECRETS AND ARE CONFIDENTIAL AND PROPRIETARY PRODUCTS OF ANSYS, INC., ITS SUBSIDIARIES, OR LICENSORS. The software products and documentation are furnished by ANSYS, Inc., its subsidiaries, or affiliates under a software license agreement that contains provisions concerning non-disclosure, copying, length and nature of use, compliance with exporting laws, warranties, disclaimers, limitations of liability, and remedies, and other provisions. The software products and documentation may be used, disclosed, transferred, or copied only in accordance with the terms and conditions of that software license agreement.

ANSYS, Inc. is certified to ISO 9001:2008.

U.S. Government Rights

For U.S. Government users, except as specifically granted by the ANSYS, Inc. software license agreement, the use, duplication, or disclosure by the United States Government is subject to restrictions stated in the ANSYS, Inc. software license agreement and FAR 12.212 (for non-DOD licenses).

Third-Party Software

See the legal information in the product help files for the complete Legal Notice for ANSYS proprietary software and third-party software. If you are unable to access the Legal Notice, please contact ANSYS, Inc.

Published in the U.S.A.


Table of Contents

ANSYS DesignXplorer Overview .... 1
    Introduction to ANSYS DesignXplorer .... 1
        Available Tools .... 1
    Design Exploration: What to Look for in a Typical Session .... 3
        Initial Simulation: Analysis of the Product Under all Operating Conditions .... 3
        Identify Design Candidates .... 3
        Assess the Robustness of the Design Candidates .... 4
        Determine the Number of Parameters .... 5
        Increase the Response Surface Accuracy .... 5
    Limitations .... 5
    ANSYS DesignXplorer Licensing Requirements .... 5
    The User Interface .... 6
    Parameters .... 10
    Design Points .... 12
    Response Points .... 13
    Workflow .... 13
        Adding Design Exploration Templates to the Schematic .... 14
        Duplicating Existing DesignXplorer Systems .... 14
        Running Design Exploration Analyses .... 15
        Monitoring Design Exploration Analyses .... 15
        Reporting on the Design Exploration Analysis .... 16
    Using DesignXplorer Charts .... 16
    Exporting and Importing Data .... 18
    Design Exploration Options .... 19
DesignXplorer Systems and Components .... 27
    What is Design Exploration? .... 27
    DesignXplorer Systems .... 27
        Parameters Correlation System .... 27
        Response Surface System .... 28
        Goal Driven Optimization Systems .... 28
        Six Sigma Analysis System .... 28
    DesignXplorer Components .... 29
        Design of Experiments Component Reference .... 29
            Parameters Parallel Chart .... 30
            Design Points vs Parameter Chart .... 31
        Parameters Correlation Component Reference .... 31
            Correlation Scatter Chart .... 33
            Correlation Matrix Chart .... 34
            Determination Matrix Chart .... 34
            Sensitivities Chart .... 34
            Determination Histogram Chart .... 35
        Response Surface Component Reference .... 35
            Response Chart .... 38
            Local Sensitivity Charts .... 39
            Spider Chart .... 39
        Optimization Component Reference .... 40
            Convergence Criteria Chart .... 43
            History Chart .... 43
            Candidate Points Results .... 44
            Tradeoff Chart .... 44


            Samples Chart .... 45
            Sensitivities Chart .... 45
        Six Sigma Analysis Component Reference .... 46
            Design of Experiments (SSA) .... 46
            Six Sigma Analysis .... 48
            Sensitivities Chart (SSA) .... 49
Using Parameters Correlation .... 51
    Sample Generation .... 52
    Running a Parameters Correlation .... 53
    Viewing the Quadratic Correlation Information .... 55
    Determining Significance .... 55
    Viewing Significance and Correlation Values .... 55
    Parameters Correlation Charts .... 55
        Using the Correlation Matrix Chart .... 56
        Using the Correlation Scatter Chart .... 57
        Using the Determination Matrix Chart .... 59
        Using the Determination Histogram Chart .... 60
        Using the Sensitivities Chart .... 61
Using Design of Experiments .... 63
    Setting Up the Design of Experiments .... 63
    Design of Experiments Types .... 64
        Central Composite Design (CCD) .... 64
        Optimal Space-Filling Design (OSF) .... 65
        Box-Behnken Design .... 66
        Custom .... 67
        Custom + Sampling .... 67
        Sparse Grid Initialization .... 68
        Latin Hypercube Sampling Design (LHS) .... 68
    Number of Input Parameters for DOE Types .... 69
    Comparison of LHS and OSF DOE Types .... 70
    Using a Central Composite Design DOE .... 71
    Upper and Lower Locations of DOE Points .... 73
    DOE Matrix Generation .... 74
    Importing and Copying Design Points .... 74
Using Response Surfaces .... 77
    Meta-Model Types .... 77
        Standard Response Surface - Full 2nd-Order Polynomial .... 77
        Kriging .... 78
            Using Kriging Auto-Refinement .... 79
            Kriging Auto-Refinement Properties .... 81
            Kriging Convergence Curves Chart .... 82
        Non-Parametric Regression .... 83
        Neural Network .... 83
        Sparse Grid .... 84
            Using Sparse Grid Refinement .... 86
            Sparse Grid Auto-Refinement Properties .... 87
            Sparse Grid Convergence Curves Chart .... 88
    Meta-Model Refinement .... 89
        Working with Meta-Models .... 89
        Changing the Meta-Model .... 90
        Performing a Manual Refinement .... 92
    Goodness of Fit .... 92


Predicted versus Observed Chart ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95Advanced Goodness of Fit Report ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96Using Verification Points ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

Min-Max Search .... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98Response Surface Charts ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

Using the Response Chart ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101Understanding the Response Chart Display .... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102Using the 2D Contour Graph Response Chart ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104Using the 3D Contour Graph Response Chart ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105Using the 2D Slices Response Chart ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107Response Chart: Example .... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110Response Chart: Properties ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

Using the Spider Chart ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112Using Local Sensitivity Charts ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

Understanding the Local Sensitivities Display .... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113Using the Local Sensitivity Chart ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

Local Sensitivity Chart: Example .... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116Local Sensitivity Chart: Properties ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

        Using the Local Sensitivity Curves Chart .... 118
            Local Sensitivity Curves Chart: Example .... 120
            Local Sensitivity Curves Chart: Properties .... 123

Using Goal Driven Optimization .... 125
    Creating a Goal Driven Optimization System .... 126
    Transferring Design Point Data for Direct Optimization .... 127
    Goal Driven Optimization Methods .... 128

        Performing a Screening Optimization .... 130
        Performing a MOGA Optimization .... 132
        Performing an NLPQL Optimization .... 135
        Performing an MISQP Optimization .... 137
        Performing an Adaptive Single-Objective Optimization .... 139
        Performing an Adaptive Multiple-Objective Optimization .... 142
        Performing an Optimization with an External Optimizer .... 146

            Locating and Downloading Available Extensions .... 146
            Installing an Optimization Extension .... 146
            Loading an Optimization Extension .... 147
            Selecting an External Optimizer .... 147
            Setting Up an External Optimizer Project .... 148
            Performing the Optimization .... 148

    Defining the Optimization Domain .... 148
        Defining Input Parameters .... 149
        Defining Parameter Relationships .... 150

    Defining Optimization Objectives and Constraints .... 153
    Working with Candidate Points .... 157

        Viewing and Editing Candidate Points in the Table View .... 158
        Retrieving Intermediate Candidate Points .... 159
        Creating New Design, Response, Refinement, or Verification Points from Candidates .... 160
        Verify Candidates by Design Point Update .... 161

    Goal Driven Optimization Charts and Results .... 162
        Using the Convergence Criteria Chart .... 162

            Using the Convergence Criteria Chart for Multiple-Objective Optimization .... 163
            Using the Convergence Criteria Chart for Single-Objective Optimization .... 164

        Using the History Chart .... 165

Release 15.0 - © SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information of ANSYS, Inc. and its subsidiaries and affiliates.


            Working with the History Chart in the Chart View .... 166
            Viewing History Chart Sparklines in the Outline View .... 169
            Using the Objective/Constraint History Chart .... 170
            Using the Input Parameter History Chart .... 171
            Using the Parameter Relationships History Chart .... 172

        Using the Candidate Points Results .... 173
            Understanding the Candidate Points Results Display .... 173
            Candidate Points Results: Properties .... 174

        Using the Sensitivities Chart (GDO) .... 175
        Using the Tradeoff Chart .... 176
        Using the Samples Chart .... 178

Using Six Sigma Analysis .... 181
    Performing a Six Sigma Analysis .... 181
    Using Statistical Postprocessing .... 181

        Tables (SSA) .... 181
        Using Parameter Charts (SSA) .... 182
        Using the Sensitivities Chart (SSA) .... 182

        Statistical Measures .... 182
Working with DesignXplorer .... 185

    Working with Parameters .... 185
        Input Parameters .... 186

            Defining Discrete Input Parameters .... 187
            Defining Continuous Input Parameters .... 188
            Defining Manufacturable Values .... 188
            Changing Input Parameters .... 190
            Changing Manufacturable Values .... 191

            Setting Up Design Variables .... 191
            Setting Up Uncertainty Variables .... 191
        Output Parameters .... 193

    Working with Design Points .... 193
        Design Point Update Order .... 195
        Design Point Update Location .... 195
        Preserving Generated Design Points to the Parameter Set .... 196
        Exporting Generated Design Points to Separate Projects .... 197
        Inserting Design Points .... 197
        Cache of Design Point Results .... 197
        Raw Optimization Data .... 198
        Design Point Log Files .... 198
        Failed Design Points .... 200

            Preventing Design Point Update Failures .... 200
            Preserving Design Points and Files .... 202
            Handling Failed Design Points .... 203

    Working with Sensitivities .... 204
    Working with Tables .... 205

        Viewing Design Points in the Table View .... 206
        Editable Output Parameter Values .... 206
        Copy/Paste .... 207
        Exporting Table Data .... 207
        Importing Data from a CSV File .... 207

    Working with Remote Solve Manager and DesignXplorer .... 208
    Working with Design Exploration Project Reports .... 210

DesignXplorer Theory .... 213



    Understanding Response Surfaces .... 213
        Central Composite Design .... 214
        Box-Behnken Design .... 215
        Standard Response Surface - Full 2nd-Order Polynomial Algorithms .... 216
        Kriging Algorithms .... 217
        Non-Parametric Regression Algorithms .... 218
        Sparse Grid Algorithms .... 221

    Understanding Goal Driven Optimization .... 224
        Principles (GDO) .... 225
        Guidelines and Best Practices (GDO) .... 227
        Goal Driven Optimization Theory .... 229

            Sampling for Constrained Design Spaces .... 230
            Shifted Hammersley Sampling (Screening) .... 232
            Single-Objective Optimization Methods .... 233

                Nonlinear Programming by Quadratic Lagrangian (NLPQL) .... 233
                Mixed-Integer Sequential Quadratic Programming (MISQP) .... 240
                Adaptive Single-Objective Optimization (ASO) .... 241

            Multiple-Objective Optimization Methods .... 244
                Pareto Dominance in Multi-Objective Optimization .... 244
                Convergence Criteria in MOGA-Based Multi-Objective Optimization .... 245
                Multi-Objective Genetic Algorithm (MOGA) .... 246
                Adaptive Multiple-Objective Optimization (AMO) .... 251

            Decision Support Process .... 253
    Understanding Six Sigma Analysis .... 257

        Principles (SSA) .... 258
        Guidelines for Selecting SSA Variables .... 260

            Choosing and Defining Uncertainty Variables .... 260
                Uncertainty Variables for Response Surface Analyses .... 260
                Choosing a Distribution for a Random Variable .... 260

                    Measured Data .... 260
                    Mean Values, Standard Deviation, Exceedance Values .... 261
                    No Data .... 262

                    Distribution Functions .... 263
        Sample Generation .... 267
        Weighted Latin Hypercube Sampling .... 267
        Postprocessing SSA Results .... 267

            Histogram .... 267
            Cumulative Distribution Function .... 268
            Probability Table .... 269
            Statistical Sensitivities in a Six Sigma Analysis .... 270

        Six Sigma Analysis Theory .... 271
Troubleshooting .... 279
Appendices .... 281

    Extended CSV File Format .... 281
Index .... 283




ANSYS DesignXplorer Overview

The following links provide quick access to information concerning the ANSYS DesignXplorer and its use:

• Limitations (p. 5)

• What is Design Exploration? (p. 27)

• The User Interface (p. 6)

• Parameters (p. 10)

• Design Points (p. 12)

• Response Points (p. 13)

• Workflow (p. 13)

• Design Exploration Options (p. 19)

• "DesignXplorer Systems and Components" (p. 27)

Introduction to ANSYS DesignXplorer

Because a good design point is often the result of a trade-off between various objectives, the exploration of a given design cannot be performed by using optimization algorithms that lead to a single design point. It is important to gather enough information about the current design to be able to answer the so-called “what-if” questions, quantifying the influence of design variables on the performance of the product in an exhaustive manner. By doing so, the right decisions can be made based on accurate information, even in the event of an unexpected change in the design constraints.

Design exploration describes the relationship between the design variables and the performance of the product by using Design of Experiments (DOE), combined with response surfaces. DOE and response surfaces provide all of the information required to achieve Simulation Driven Product Development. Once the variation of the performance with respect to the design variables is known, it becomes easy to understand and identify all changes required to meet the requirements for the product. Once the response surfaces are created, the information can be shared in easily understandable terms: curves, surfaces, sensitivities, etc. They can be used at any time during the development of the product without requiring additional simulations to test a new configuration.
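
To make the idea concrete, the DOE-plus-response-surface workflow can be sketched in a few lines of Python. This is an illustrative stand-in, not DesignXplorer's implementation: the solver function, design space, and sample layout below are all hypothetical.

```python
import numpy as np

def simulate(x1, x2):
    # Stand-in for an expensive solver run (hypothetical response).
    return 1.0 + 2.0 * x1 - 3.0 * x2 + 0.5 * x1 * x2 + x1 ** 2

# Small three-level factorial DOE over the design space [-1, 1] x [-1, 1].
levels = [-1.0, 0.0, 1.0]
doe = [(a, b) for a in levels for b in levels]
y = np.array([simulate(a, b) for a, b in doe])

def basis(x1, x2):
    # Full 2nd-order polynomial basis in two variables.
    return [1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

A = np.array([basis(a, b) for a, b in doe])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Answer a "what-if" question at an untried design point from the
# surface, with no further solver runs.
point = (0.25, -0.4)
predicted = float(np.dot(basis(*point), coef))
print(round(predicted, 6), round(simulate(*point), 6))  # 2.7125 2.7125
```

Because the hypothetical response here is itself quadratic, the surface reproduces it exactly; for a real solver the surface is an approximation whose quality depends on the DOE.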

Available Tools

DesignXplorer has a powerful suite of DOE schemes:

• Central Composite Design (CCD) (p. 64)

• Optimal Space-Filling Design (OSF) (p. 65)


Page 10: Design Exploration Users Guide

• Box-Behnken Design

• Custom

• Custom + Sampling

• Sparse Grid Initialization

• Latin Hypercube Sampling Design (LHS)

CCD provides a traditional DOE sampling set, while Optimal Space-Filling's objective is to gain the maximum insight from the fewest points, a feature that is very useful when the computation time available to the user is limited.
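As a minimal illustration of the space-filling idea, here is a generic Latin Hypercube sampler (the scheme listed above as LHS): each parameter's range is split into n equal bins and every bin is hit exactly once per parameter, so the points spread across the whole space. This sketch is a textbook version, not DesignXplorer's own sampler:

```python
import numpy as np

def latin_hypercube(n_points, n_dims, seed=None):
    """Unit-cube LHS: split each axis into n equal bins, hit each bin once."""
    rng = np.random.default_rng(seed)
    # One random point inside each of the n equal-width bins per dimension...
    pts = (rng.random((n_points, n_dims)) + np.arange(n_points)[:, None]) / n_points
    # ...then shuffle the bin order independently for every dimension.
    for d in range(n_dims):
        pts[:, d] = pts[rng.permutation(n_points), d]
    return pts  # scale each column to the corresponding parameter's range

samples = latin_hypercube(8, 2, seed=1)
```

Optimal Space-Filling designs refine this further by rearranging the samples to maximize their spread, which is why few points can still cover the design space well.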

After sampling, design exploration provides several different meta-models to represent the simulation's responses. The following types of response surfaces are available:

• Standard Response Surface - Full 2nd-Order Polynomial

• Kriging

• Non-Parametric Regression

• Neural Network

• Sparse Grid

These meta-models can accurately represent highly nonlinear responses, such as those found in high-frequency electromagnetics.

Once the simulation’s responses are characterized, DesignXplorer supplies the following types of optimization algorithms:

• Shifted Hammersley Sampling (Screening)

• Multi-Objective Genetic Algorithm (MOGA)

• Nonlinear Programming by Quadratic Lagrangian (NLPQL)

• Mixed-Integer Sequential Quadratic Programming (MISQP)

• Adaptive Single-Objective Optimization (ASO)

• Adaptive Multiple-Objective Optimization (AMO)

You can also use extensions to integrate external optimizers into the DX workflow. For more information, see Performing an Optimization with an External Optimizer.

Several graphical tools are available to investigate a design: sensitivity plots, correlation matrices, curves, surfaces, trade-off plots and parallel charts with Pareto Front display, and Spider charts.

Correlation matrix techniques are also provided to help the user identify the key parameters of a design before creating response surfaces.


Design Exploration: What to Look for in a Typical Session

The main purpose of design exploration is to identify the relationship between the performance of the product (maximum stress, mass, fluid flow, velocities, etc.) and the design variables (dimensions, loads, material properties, etc.). Based on these results, the analyst will be able to influence the design so as to meet the product's requirements, to identify the key parameters of the design, and to understand how they influence the performance.

Design exploration provides tools to analyze a parametric design with a reasonable number of parameters. The Response Surface methods described here are suitable for problems using about 10 to 15 input parameters.

Initial Simulation: Analysis of the Product Under all Operating Conditions

The first step of any design simulation is to create the simulation model. The simulation model can use anything from a single physics up to a complex multiphysics simulation involving multiple conditions and physics coupling.

In addition to performing the standard simulation, this step is also used to define the parameters to be investigated. The input parameters (also called design variables) are identified, and may include CAD parameters, loading conditions, material properties, etc.

The output parameters (also called performance indicators) are chosen from the simulation results and may include maximum stresses, fluid pressure, velocities, temperatures, or masses; they can also be custom-defined. Product cost could be a custom-defined parameter based on masses, manufacturing constraints, etc.

CAD parameters are defined from a CAD package or from the ANSYS DesignModeler application. Material properties are found in the Engineering Data section of the project, while other parameters will have their origin in the simulation model itself. Output parameters will be defined from the various simulation environments (mechanics, CFD, etc.). Custom parameters are defined directly in the Parameter Set tab accessed via the Parameter Set bar in the Project Schematic.

Identify Design Candidates

Once the initial model has been created and parameters defined, the next step in the session is to create a response surface. After inserting a Response Surface system in the project, you need to define the design space by giving the minimum and maximum values to be considered for each of the input variables. Based on this information, the Design of Experiments (DOE) part of the Response Surface system will create the design space sampling. Note that this sampling depends upon the choice made for the DOE scheme; usually, the default CCD scheme will provide good accuracy for the final approximation. Then, the DOE needs to be computed.

Once the DOE has been updated, a response surface is created for each output parameter. A response surface is an approximation of the response of the system. Its accuracy depends on several factors: the complexity of the variations of the output parameters, the number of points in the original DOE, and the choice of the response surface type. Several main types of response surfaces are available in DesignXplorer. As a starting point, the Standard Response Surface (based on a modified quadratic formulation) will provide satisfying results when the variations of the output parameters are mild, while the Kriging scheme is better suited for stronger variations.

After the response surfaces have been computed, the design can be thoroughly investigated using a variety of graphical and numerical tools, and valid design points can be identified by optimization techniques.


Usually, the investigation will start with the sensitivity graphs. These bar or pie charts graphically show how much the output parameters are locally influenced by the input parameters around a given response point. Note that varying the location of the response point may provide totally different graphs. Thinking of the hill/valley analogy, if the response point is in a flat valley, the influence of the input parameters will be small. If the point is at the top of a steep hill, the influence of the parameters will be strong. The sensitivity graphs provide the first indication of the relative influence of the input parameters.
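The hill/valley analogy can be made concrete with a small sketch: a local sensitivity is just the slope of the response surface with respect to each input at the chosen response point. The surface below is an invented stand-in, and the central-difference estimator is a generic technique, not DesignXplorer's internal method:

```python
import numpy as np

def surface(x):
    """Stand-in response surface (assumed): steep in x1 away from 0, mild in x2."""
    x1, x2 = x
    return x1**2 + 0.1 * x2

def local_sensitivities(f, point, h=1e-6):
    """Central-difference slopes of f with respect to each input at `point`."""
    point = np.asarray(point, dtype=float)
    grads = []
    for i in range(point.size):
        step = np.zeros_like(point)
        step[i] = h
        grads.append((f(point + step) - f(point - step)) / (2 * h))
    return np.array(grads)

flat = local_sensitivities(surface, [0.0, 0.0])   # bottom of the "valley"
steep = local_sensitivities(surface, [2.0, 0.0])  # on the "hillside"
```

At the valley bottom the x1 sensitivity is essentially zero, while on the hillside the same parameter dominates, which is exactly why moving the response point can change the sensitivity chart completely.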

The response surfaces will provide curves or surfaces that show the variation of one output parameter with respect to one or two input parameters at a time. These curves and surfaces are also dependent on the response point.

Both sensitivity charts and response surfaces are key tools that enable the analyst to answer the “what-if” questions (“What parameter should we change if we want to reduce the cost?”).

DesignXplorer provides additional tools to identify design candidates. While they could be determined by a thorough investigation of the curves, it might be convenient to be guided automatically to some interesting candidates. Access to optimization techniques that find design candidates from the response surfaces or other components containing design points is provided by the Goal Driven Optimization (GDO) systems. There are two types of GDO systems: Response Surface Optimization and Direct Optimization. These systems can be dragged and dropped over an existing Response Surface system so as to share this portion of the data. For Direct Optimization, data transfer links can be created between the Optimization component and any system or component containing design points. Several GDO systems can be inserted in the project, which is useful if several hypotheses are to be analyzed.

Once a GDO system has been introduced, the optimization study needs to be defined, which includes choosing the optimization method, setting the objectives and constraints, and specifying the domain. Then the optimization problem can be solved. In many cases, there will not be a unique solution to the optimization problem, and several candidates will be identified. The optimization process is also very likely to provide candidates that cannot be manufactured (a radius of 3.14523 mm is probably hard to achieve!). But since all information about the variability of the output parameters is provided by the source of design point data, whether a response surface or another DesignXplorer component, it becomes easy to find an acceptable design candidate close to the one indicated by the optimization process.
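The "nearby acceptable design" idea amounts to snapping an optimizer's candidate to the closest value that can actually be produced, then re-checking the outputs for that feasible point. The stock sizes and candidate value below are invented for illustration:

```python
import numpy as np

# Assumed list of radii that can actually be manufactured (e.g. stock drill sizes).
available_radii = np.array([3.0, 3.1, 3.2, 3.5, 4.0])

def snap(value, choices):
    """Return the manufacturable value closest to the optimizer's candidate."""
    return float(choices[np.argmin(np.abs(choices - value))])

candidate = 3.14523                     # radius suggested by the optimizer, in mm
feasible = snap(candidate, available_radii)
```

After snapping, the response surface (or a validation solve) can be evaluated at the feasible value to confirm the design still meets its objectives.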

As a good practice, it is also recommended to check the accuracy of the response surface for the design candidates. To do so, the candidate should be verified by a design point update, so as to check the validity of the output parameters.

Assess the Robustness of the Design Candidates

Once one or several design points have been identified, the probabilistic analysis will help quantify the reliability or quality of the product by means of a statistical analysis. Probabilistic analysis typically involves four areas of statistical variability: geometric shape, material properties, loading, and boundary conditions. For example, the statistical variability of the geometry of a product would try to capture product-to-product differences due to manufacturing imperfections quantified by manufacturing tolerances. Probabilistic characterization provides a probability of success or failure and not just a simple yes/no evaluation. For instance, a probabilistic analysis could determine that one part in 1 million would fail, or the probability of a product surviving its expected useful life.
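The shift from a yes/no check to a probability can be sketched with a generic Monte Carlo loop over a cheap surrogate: the input gets a statistical distribution instead of a min/max range, and the result is a failure probability. The stress formula, distribution parameters, and allowable limit below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def stress(thickness):
    """Stand-in response surface (assumed): stress falls as thickness grows."""
    return 100.0 / thickness

allowable = 60.0   # failure when stress exceeds this limit (assumed)

# Manufacturing scatter: thickness is no longer a fixed value but a distribution.
thickness = rng.normal(loc=2.0, scale=0.1, size=100_000)

# Probability of failure, not a yes/no answer.
p_fail = float(np.mean(stress(thickness) > allowable))
```

Because the samples are evaluated on a surrogate rather than the full simulation, a large sample count remains affordable, which is what makes this kind of robustness statement practical.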

Inserting a Six-Sigma system in the schematic will provide the robustness analysis. The process here is very similar to the response surface creation. The main difference is in the definition of the parameters: instead of giving minimum and maximum values defining the parameters' ranges, the parameters will be described in terms of statistical curves and their associated parameters. Once the parameters are defined, a new DOE will be computed, as well as the corresponding response surfaces. Additional results in the form of probabilities and statistical distributions of the output parameters are provided, along with the sensitivity charts and the response surfaces.

Determine the Number of Parameters

To get accurate response surfaces within a reasonable amount of time, the number of input parameters should be limited to 10 to 15. If more parameters need to be investigated, the correlation matrix will provide a way to identify some key parameters before creating the response surface. The Parameters Correlation systems should be used prior to the Response Surface systems, to reduce the number of parameters to the above-mentioned limit. The Parameters Correlation method will perform simulations based on a random sampling of the design space, so as to identify the correlation between all parameters. The number of simulations will depend upon the number of parameters, as well as the convergence criteria for the means and standard deviations of the parameters. The user can provide a hard limit for the number of points to be computed; it should be noted that the accuracy of the correlation matrix might be affected if not enough points are computed.

Increase the Response Surface Accuracy

Usually, a richer set of points for the DOE will provide a more accurate response surface. However, it may not be easy to know, before the response surface is created, how many additional points are required.

The Kriging response surface is a first solution to that problem. It allows DesignXplorer to determine the accuracy of the response surface as well as the points that would be required to increase this accuracy. To use this feature, the Response Surface Type should be set to Kriging. The refinement options will then be available to the user. The refinement can be automated or user-defined; the latter option should be used to control the number of points to be computed.

The Sparse Grid response surface is another solution. It is an adaptive meta-model driven by the accuracy that you request. It automatically refines the matrix of design points where the gradient of the output parameters is higher in order to increase the accuracy of the response surface. To use this feature, the Response Surface Type should be set to Sparse Grid and the Design of Experiments Type should be set to Sparse Grid Initialization.

There is also a manual refinement option for all of the response surface types except Sparse Grid, which allows the user to enter specific points into the set of existing design points used to calculate the response surface.

Limitations

• Suppressed properties are not available for the Design of Experiments (DOE) method.

• When you use an external optimizer for Goal Driven Optimization, unsupported properties and functionality are not displayed in the DesignXplorer interface.

ANSYS DesignXplorer Licensing Requirements

In order to run any of the ANSYS DesignXplorer systems, you must have an ANSYS DesignXplorer license. This license must be available when you Preview or Update a DesignXplorer component or system, and also as soon as results need to be generated from the response surface.


You must also have licenses available for the systems that will be solving the design points generated by the DesignXplorer systems (Design of Experiments, Parameters Correlation, and Response Surface if a refinement is defined). The licenses for the solvers must be available when the user Updates the cells that generate design points.

The ANSYS DesignXplorer license will be released when:

• all DesignXplorer systems are deleted from the schematic

• the user uses the New or Open command to switch to a project without any DesignXplorer systems

• the user exits Workbench

Note

• If you do not have an ANSYS DesignXplorer license, you can successfully resume an existing design exploration project and review the already generated results, with some interaction possible on charts. But as soon as an ANSYS DesignXplorer Update or a response surface evaluation is needed, a license is required and you will get an error dialog.

• In order to use the ANSYS Simultaneous Design Point Update capability, you must have one solver license for each simultaneous solve. Be aware that the number of design points that can be solved simultaneously may be limited by hardware, by your RSM configuration, or by available solver licenses.

• When opening a project created prior to release 13.0, an ANSYS DesignXplorer license is initially required to generate the parametric results that were not persisted before. Once the project is saved, a license is no longer required to reopen the project.

The User Interface

The Workbench user interface allows you to easily build your project on a main project workspace called the Project tab. In the Project tab, you can take DesignXplorer systems from the Toolbox and add them to the Project Schematic. DesignXplorer systems allow you to perform the different types of parametric analyses on your project, as described in the previous section. Once you’ve added the systems to your Project Schematic, you edit the cells in the systems to open the tabs for those cells.

Project Schematic View

You will initially create your project containing systems and models in the Project Schematic. Then you can choose which of the parameters in those systems will be exposed to the DesignXplorer systems through the Parameter Set bar. Once the systems and Parameter Set are defined, you can add the DesignXplorer systems that you need.


Toolbox

When you are viewing the Project Schematic, the DesignXplorer system templates are available from the DesignXplorer Toolbox.

To perform a particular type of analysis, drag the template from the Toolbox onto your Project Schematic below the Parameter Set bar.

When you are viewing a tab, the Toolbox will display items that can be added to the Outline view based on the currently selected cell. For example, if you are in a Response Surface tab and you select a Response Point cell in the Outline view, the Toolbox will contain chart templates that you can add to that response point. Double-click on a template, or drag it onto the selected cell in the Outline view, to add it to the Outline. Or you can insert a chart using the appropriate right-click menu option when the Charts folder or a Response Point cell is selected in the Outline view.


Component Tabs

Once a DesignXplorer system is added to the Project Schematic, you can open component tabs via the cells in the system. Each tab is a workspace in which you can set up analysis options, run the analysis, and view results. For example, if you right-click on the Design of Experiments cell and select Edit, the corresponding Design of Experiments tab will open.


You will see the same window configuration for any DesignXplorer system component that you edit. The four views visible are:

• Outline: provides a hierarchy of the main objects that make up the component that you are editing.

The state icon on the root node, also referred to as the model node, tells you if the data is up to date or needs to be updated. It also helps you to figure out the impact of your changes on the component and parameter properties.

A quick help message is associated with the various states. It is particularly useful when the state is Attention Required, because it explains what is invalid in the current configuration. To display the quick help message, click the Information button in the Help column to the right of the model node.

On result object nodes (response points, charts, Min-Max search, etc.), a state icon showing whether the object is up to date or out of date helps you to understand the current status of the object. If you change a Design of Experiments setting, the state icon of the corresponding chart will be updated, given the pending changes. For a result object in the update failed state, you can try to update the object using the appropriate right-click menu option when its node is selected.

• Table: provides a tabular view of the data associated with that component. The header bar contains a description of the table definition. Right-click on a cell of the table to export the data table to a CSV (comma-separated values) file.

In the Table view, input parameter values and output parameter values that are obtained from a simulation are displayed in black text. Output parameters that are based on a response surface are displayed in a different color, as specified in the Options dialog. For details, see Response Surface Options.


• Properties: provides access to the properties of the object selected in the Outline view. For example, when a parameter is selected in the Outline, you can set its bounds. When the Response Surface node is selected, you can set its meta-model type and the associated options. When a chart is selected in the Outline, you can set its plotting properties, etc.

The Properties view displays output values for the charts or response points selected in the Outline view. Input parameter values and output parameter values that are obtained from a simulation are displayed in black text, while output parameters that are based on a response surface are displayed in a different color, as specified in the Options dialog. For details, see Response Surface Options.

Note

In the Properties view, the coloring convention for output parameter values is not applied to the Calculated Minimum and Calculated Maximum values; these values are always displayed in black text.

• Chart or Results: displays the various charts available for the different DesignXplorer system cells. Right-click on the chart to export the data chart to a CSV (comma-separated values) file. This section is labeled Results only for the Optimization tab of a Goal Driven Optimization study.

Note that you can insert and duplicate charts (or a response point with charts for the Response Surface cell) even if the system is out of date. In that case, the charts are displayed and updated in the Chart view when the system is updated. For any DesignXplorer cell where a chart is inserted before updating the system, all types of charts supported by the cell are inserted by default at the end of the update. If a cell already contains a chart, then no new chart is inserted by default. (For the Response Surface, if there is no response point, a response point with all charts is inserted by default.) For more information, see Using DesignXplorer Charts.

In general, you select a cell in the Outline view and either set up its Properties or review the Chart and/or Table associated with that cell.

Context Menu

When you right-click on a DesignXplorer component in the Project Schematic, or on the component node in the Outline view when editing a component, the context menu provides the following options relevant to the component and its state: Update, Preview, Clear Generated Data, and Refresh. These operations are performed only on the selected component.

Parameters

The following types of parameters are used in DesignXplorer:

• Input Parameters

• Output Parameters

See Working with Parameters and Design Points for more information about parameters.

See Tree Usage in Parameters and Parameter Set Tabs for more information about grouping parameters by application.


Input Parameters

Input parameters are those parameters that define the inputs to the analysis for the model under investigation. Input parameters have predefined ranges, which can be changed. They include (but are not limited to) CAD parameters, Analysis parameters, DesignModeler parameters, and Mesh parameters. CAD and DesignModeler input parameters may include length, radius, etc.; Analysis input parameters may include pressure, material properties, materials, sheet thickness, etc.; Mesh parameters may include relevance, number of prism layers, or mesh size on an entity.

When you start a Design of Experiments analysis in ANSYS DesignXplorer, a default of +/- 10% of the current value of each input parameter is used to initialize the parameter range. If any parameter has a current value of 0.0, then the initial parameter range will be computed as 0.0 → 10.0. Since DesignXplorer is not aware of the physical limits of parameters, you will need to check that the assigned range is compatible with the physical limits of the parameter. Ideally, the current value of the parameter will be at the midpoint of the range between the upper and lower bound, but this is not a requirement. For details on defining the range for input parameters, see Defining Continuous Input Parameters and Defining the Range for Manufacturable Values.

When you define a range for an input parameter, the relative variation must be equal to or greater than 1e-10 in its current unit. If the relative variation is less than 1e-10, you can either adjust the variation range or disable the parameter.
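The two range rules above (the +/-10% default with its zero-value special case, and the minimum-variation check) can be sketched as plain functions. This is an illustrative restatement of the documented behavior, not DesignXplorer code:

```python
def initial_range(current):
    """Default DOE range: +/-10% of the current value; 0.0 maps to (0.0, 10.0)."""
    if current == 0.0:
        return (0.0, 10.0)              # documented special case for a zero value
    lo, hi = 0.9 * current, 1.1 * current
    return (min(lo, hi), max(lo, hi))   # keeps lower <= upper for negative values

def variation_ok(lower, upper, min_variation=1e-10):
    """Reject ranges whose variation is below the documented 1e-10 minimum."""
    return abs(upper - lower) >= min_variation

bounds = initial_range(25.0)   # approximately (22.5, 27.5)
```

Note that the default range says nothing about physical limits; as stated above, the user still has to confirm the bounds make sense for the parameter.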

If you disable an input parameter, its initial value (which becomes editable) will be used for the design exploration study. If you change the initial value of a disabled input parameter during the study, all dependent results are invalidated. A disabled input parameter can have a different initial value in each DesignXplorer system.

Input parameters can be discrete or continuous, and each of these has specific forms.

Parameter Type          Description                                   Example
Discrete Parameters     Valid only at integer values                  Number of holes, number of weld points, number of prism layers
Continuous Parameters   Continuous, non-interrupted range of values   Thickness, force, temperature

Discrete parameters physically represent different configurations or states of the model. An example is the number of holes in a geometry. Discrete parameters allow you to analyze different design variations in the same parametric study without having to create multiple models for parallel parametric analysis. For more information, see Defining Discrete Input Parameters.

Continuous parameters physically vary in a continuous manner between a lower and an upper bound defined by the user. Examples are a CAD dimension or a load magnitude. Continuous parameters allow you to analyze a continuous value within a defined range, with each parameter representing a direction in the design and treated as a continuous function in the DOE and Response Surface. For more information, see Defining Continuous Input Parameters.

With continuous parameters, you can also apply a Manufacturable Values filter to the parameter. Manufacturable Values represent real-world manufacturing or production constraints, such as drill bit sizes, sheet metal thicknesses, and readily available bolts or screws. By applying the Manufacturable Values filter to a continuous parameter, you ensure that only those parameter values that realistically represent manufacturing capabilities are included in the postprocessing analysis. For more information, see Defining Manufacturable Values.


Output Parameters

Output parameters are those parameters that result from the geometry or are the response outputs from the analysis. They include (but are not limited to) volume, mass, frequency, stress, heat flux, number of elements, and so forth.

Derived Parameters

Derived parameters are specific output parameters defined as an analytical combination of output or input parameters. As the definition suggests, derived parameters are calculated from other parameters by using equations that you provide. They are created in the system and passed into ANSYS DesignXplorer as output parameters. See Working with Parameters and Design Points for more information.
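For example, the "product cost" parameter mentioned earlier could be a derived parameter combining a simulated volume output with a density input and a price constant. All names and values below are invented for illustration:

```python
# Hypothetical derived-parameter expression; none of these values come from a
# real project.
density = 7850.0         # input parameter: steel density, kg/m^3 (assumed)
volume = 0.002           # output parameter from the simulation, m^3 (assumed)
price_per_kg = 1.2       # user-supplied constant, currency units per kg (assumed)

mass = density * volume                 # combines an input and an output
material_cost = mass * price_per_kg     # the derived parameter itself
```

Once defined, such a quantity behaves like any other output parameter in the postprocessing tools.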

Tree Usage in Parameters and Parameter Set Tabs

A Workbench project can contain parameters defined in different systems, as well as parameters defined in the Parameters tab and/or the Parameter Set tab. You can review the parameters of a given system via the Parameters cell appearing at the bottom of the system, or via the Parameter Set bar. To open the corresponding tab, right-click and select Edit.

When editing the Parameter Set bar, all of the parameters of the project are listed in the Outline, under the Input Parameters and Output Parameters nodes, depending on their nature.

The parameters are also grouped by system name to reflect the origin of the parameters and the structure of the Project Schematic when working in parametric environments. Parameters are manipulated in the Parameters, Parameter Set, and DesignXplorer tabs, so the same tree structure is used in all of these tabs.

Note

Edit the system name by right-clicking the system name on the Project Schematic.

Design Points

A design point is defined by a snapshot of parameter values where output parameter values were calculated directly by a project update. Design points are created by design exploration; for instance, when processing a Design of Experiments or a Correlation Matrix, or when refining a Response Surface.

It is also possible to insert a design point at the project level from an optimization candidate design, in order to perform a validation update. Note that the output parameter values are not copied to the created design point, since they were calculated by design exploration and are, by definition, approximated. Actual output parameters will be calculated from the design point input parameters when a project update is done. See Working with Design Points for more information on adding design points to the project.

You can also edit design point update process settings, including the order in which points are updated and the location in which the update will occur. When submitting design points for update, you can specify whether the update will be run locally on your machine or sent via RSM for remote processing. For more information, see Working with Design Points.


Response Points

A response point is defined by a snapshot of parameter values where output parameter values were calculated in ANSYS DesignXplorer from a response surface. As such, the output parameter values are approximate, and the most promising designs should be verified by a solve in the system using the same parameter values.

New response points can be created when editing a Response Surface cell, from the Outline view or from the Table view. Response points, as well as design points, can also be inserted from the Table view or the charts, using the appropriate right-click menu option when selecting a table row or a point on a chart. For instance, it is possible to insert a new response point from a point selected with the mouse on a Response Chart. See the description of the Response Point Table for more information on creating custom response points.

It is possible to duplicate a response point by selecting it in the Outline view and either choosing Duplicate in the contextual menu or using the drag-and-drop mechanism. This operation attempts an update of the response point, so that the duplication of an up-to-date response point results in an up-to-date response point.

Note

• All the charts attached to a response point are also duplicated.

• Duplicating a chart that is a child of a response point by using the contextual menu operation creates a new chart under the same response point. By using the drag-and-drop mechanism, however, the user can create a duplicate under another response point.

Workflow

To run a parametric analysis in ANSYS DesignXplorer, you will need to:

• Create your model (either in DesignModeler or other CAD software) and load it into one of the analysis systems available in Workbench.

• Select the parameters you want to use.

• Add to the project the DesignXplorer features that you want to use.

• Set the analysis options and parameter limits.

• Update the analysis.


• View the results of the design exploration analysis.

• Generate a project report to display the results of your analysis.

Adding Design Exploration Templates to the Schematic

You can use as many of the ANSYS DesignXplorer features as you like and rerun them with different parameters and limits as many times as you need to refine your design. The following steps outline the process for putting design exploration elements into your Project Schematic:

1. Create parts and assemblies in either the DesignModeler solid modeler or any of the supported CAD systems. Features in the geometry that will be important to the analysis should be exposed as parameters. Such parameters can be passed to ANSYS DesignXplorer.

2. Drag an analysis system into the Project Schematic, connecting it to the DesignModeler or CAD file.

3. Click on the Parameter Set bar.

4. In the Outline view, click on the various input parameters that you need to modify. Set the limits of the selected parameter in the Properties view.

5. Drag the design exploration template that you would like to use into the Project Schematic below the Parameter Set bar.

Duplicating Existing DesignXplorer Systems

You can duplicate any DesignXplorer systems that you have created on your Project Schematic. The manner in which you duplicate them dictates the data that is shared between the duplicated systems.

• For any DesignXplorer system, right-click on the B1 cell of the system and select Duplicate. A new system of that type will be added to the schematic under the Parameter Set bar. No data is shared with the original system.

• For a DesignXplorer system that has a DOE cell, click on the DOE cell and select Duplicate. A new system of that type will be added to the schematic under the Parameter Set bar. No data is shared with the original system.

• For a DesignXplorer system that has a Response Surface cell, click on the Response Surface cell and select Duplicate. A new system of that type will be added to the schematic under the Parameter Set bar. The DOE data will be shared from the original system.

• For a GDO or SSA DesignXplorer system, if you click on those cells and select Duplicate, a new system of that type will be added to the schematic under the Parameter Set bar. The DOE and Response Surface data will be shared from the original system.

Any cell in the duplicated DesignXplorer system that contains data that is not shared from the original system will be marked as Update Required.

While duplicating a DesignXplorer system, the definitions of the user's data (i.e., charts, response points, and metric objects) are also duplicated. An Update operation is then needed to calculate the results for the duplicated data.


Running Design Exploration Analyses

After you have placed all of the design exploration templates on your schematic, you should set up the individual DesignXplorer systems and then run the analysis for each cell.

1. Specify design point update options in the Parameter Set Properties view. Note that these can vary from the global settings specified in the Tools > Options > Solution Process dialog.

2. For each cell in the design exploration system, right-click and select Edit to open the tab for that cell. On the tab, set up any analysis options that are needed. These can include parameter limits, optimization objectives or constraints, optimization type, Six Sigma Analysis parameter distributions, etc.

3. Run the analysis by one of the following methods:

• In the tab, right-click the cell and select Update from the context menu.

• From the Project Schematic, right-click a system cell and select Update to update the cell. To update the entire project, you can either click the Update Project toolbar button or right-click in the schematic and select Update Project.

4. Make sure that you set up and solve each cell in a DesignXplorer system to complete the analysis for that system.

5. View the results of each analysis from the tabs of the various cells in the DesignXplorer systems (charts, tables, statistics, etc.).

When you are in the Project Schematic view, cells in the various systems contain icons to indicate their state. If they are out of date and need an update, you can right-click on the cell and select Update from the menu.

Monitoring Design Exploration Analyses

After starting a design exploration analysis Update, there are two different views available to help you monitor your analysis.

Progress View

There is a Progress view that you can open from the View menu or from the Show Progress button at the lower right corner of the window. During execution, a progress execution status bar appears in the Progress cell. The name of the task that is executing appears in the Status cell and information about the execution appears in the Details cell. This view continuously reports the status of the execution until it is complete. You can stop an execution by clicking the Stop button to the right of the progress bar, and you can restart it at a later time by using any of the Update methods that you would use to start a new Update for the cell or tab.


Messages View

If solution errors exist and the Messages view is not open, the Show Messages button in the lower right of the window will flash orange and indicate the number of messages. Click this button to examine solution error information. You can also open the Messages view from the View menu in the toolbar.

Reporting on the Design Exploration Analysis

You can generate a project report to summarize and display the results of your design exploration analysis. A project report provides a visual "snapshot" of the project at a given point in time. The contents and organization of the report reflect the layout of the Project Schematic.

For more information, see Working with Design Exploration Project Reports (p. 210).

Using DesignXplorer Charts

DesignXplorer charts are visual tools that can help you to better understand your design exploration study. A variety of charts is available for each DesignXplorer component and can be viewed in the Charts view of the component tab.

All of the charts available for the component are listed in the tab's Toolbox. When you update a component for the first time, one of each chart available for that component is automatically created and inserted in the Outline view. (For the Response Surface component, a new response point is automatically inserted, as well.) If a component already contains charts, the charts are replaced with the next update.

Note

Charts are inserted under a "parent" cell on the Outline view, and most charts are created under the Charts cell. Charts for the Response Surface component are an exception; the Predicted vs. Observed chart is inserted under the Goodness of Fit cell, while the rest are inserted under a Response Point cell.

Adding or Duplicating Charts

You can insert and duplicate charts even if the system is out of date. If the system is out of date, the charts are displayed and updated in the Chart view when the system is updated.

Insert a new chart by either of the following methods:

• Drag a chart template from the Toolbox and drop it on the Outline view.

• In the Outline view, right-click on the cell under which the chart will be added and select the Insert option for the type of chart.


Duplicate a chart by either of the following methods:

• To create a second instance of a chart (with default settings) or to create a new Response Surface chart under a different response point, drag the desired chart template from the Toolbox and drop it on the parent cell.

• To create an exact copy of an existing chart, right-click on the chart and select Duplicate from the context menu.

For Response Surface charts, the Duplicate option will create an exact copy of the existing chart under the same response point. To create a fresh instance of the chart type under a different response point, use the drag-and-drop operation. To create an exact copy under a different response point, drag the existing chart and drop it on the new response point.

Chart duplication triggers a chart update. If the update succeeds, both the original chart and the duplicate will be up-to-date.

Working with Charts

When you select a chart in the Outline view, its properties display below in the Properties view and the chart itself displays in the Chart view.


In the Chart view, you can drag the mouse over various chart elements to view coordinates and other element details.

You can change chart properties manually on the Properties view. You can also use the context-menu options, accessed by right-clicking directly on the chart. Available options vary according to the type and state of the chart and allow you to perform various chart-related operations.

• To edit generic chart properties, right-click on a chart or chart element and select Edit Properties. For more information, see Setting Chart Properties in the Workbench User's Guide.

• To add new points to your design, right-click on a chart point. Depending on the chart type, you can select from the following context options: Explore Response Surface at Point, Insert as Design Point, Insert as Refinement Point, Insert as Verification Point, or Insert as Custom Candidate Point.

• On charts that allow you to enable or disable parameters, you can:

– Right-click on a chart parameter and select (depending on the parameter selected) Disable <parameterID>, Disable all Input Parameters but <parameterID>, or Disable all Output Parameters but <parameterID>.

– If at least one parameter is already disabled, you can right-click anywhere on the chart and select Reverse Enable/Disable to enable all disabled parameters and vice versa.

For general information on working with charts, see Working with the Chart View in the Workbench User's Guide.

Exporting and Importing Data

You can export the design point values of a selected DesignXplorer table or chart to an ASCII file, which then can be used by other programs for further processing. This file is primarily formatted according to DesignXplorer's "Extended CSV File Format," which is the Comma-Separated Values standard (file extension CSV) with the addition of several nonstandard formatting conventions. For details, see Extended CSV File Format (p. 281).

To export design point values to an ASCII file, right-click on a cell of a table in the Table view or right-click on a chart in the Chart view and select the Export Data menu option. The parameter values for each design point in the table will be exported to a CSV file. The values are always exported in units as defined in Workbench (i.e., as when the Display Values as Defined option is selected under the Units menu).
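If you post-process an exported file outside of Workbench, a short script can read it with standard CSV tooling. The sketch below is illustrative only: the parameter column names and the "#" comment-line convention are assumptions, not the documented Extended CSV format (see p. 281 for the actual conventions).

```python
import csv

# Hypothetical excerpt of a DesignXplorer CSV export; column names
# ("P1 - Thickness", etc.) are invented for illustration.
exported = """# Design points exported from the DOE table
Name,P1 - Thickness,P2 - Max Stress
DP 1,2.0,145.2
DP 2,2.5,131.7
DP 3,3.0,120.4
"""

def read_design_points(text):
    """Return a list of dicts, skipping '#' comment lines."""
    rows = [line for line in text.splitlines() if not line.startswith("#")]
    return list(csv.DictReader(rows))

points = read_design_points(exported)
thickest = max(points, key=lambda r: float(r["P1 - Thickness"]))
print(thickest["Name"])  # → DP 3
```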

You can also import an external CSV file to create design points in a custom Design of Experiments component, or refinement and verification points in a Response Surface component. Right-click on a cell in the Table view, or a Design of Experiments or Response Surface component in the Project Schematic view, and select the appropriate Import option from the context menu.

See Working with Tables (p. 205) for more information.


Design Exploration Options

You can set the default behavior of design exploration features through the Tools > Options dialog box. These options will be used as the default values for the analyses that you set up on the Project Schematic of your Project tab. Each system has Properties views that allow you to change the default values for any specific cell in an analysis. Changing the options in a cell's Properties view does not affect the global default options described here in the Options dialog box. To access DesignXplorer's default options:

1. From the main menu, choose Tools > Options. An Options dialog box appears. Expand the Design Exploration item in the tree.


2. Click on Design Exploration or one of its sub-options.

3. Change any of the option settings by clicking directly in the option field on the right. You may need to scroll down to see all of the fields. You will see a visual indication for the kind of interaction required in the field (examples are drop-down menus and direct text entries). Some sections of the screens may be grayed out unless a particular option above them is selected.

4. Click OK when you are finished changing the default settings.


The following options appear in the Options dialog box:

Design Exploration

Design of Experiments

Response Surface

Sampling and Optimization

There are many general Workbench options in the Options dialog box. You can find more information about them in Setting ANSYS Workbench Options.

Design Exploration Options

Show Advanced Options: If selected, advanced properties are shown for DesignXplorer components. Advanced properties are displayed in italics.

The Design Points category includes:

• Preserve Design Points After DX Run: If checked, after the solution of a design exploration cell completes, saves the design points created for the solution to the project Table of Design Points. If this option is selected, the following option is available:

– Export Project for Each Preserved Design Point: If selected, in addition to saving the design points to the project Table of Design Points, exports each of the design exploration design points as a separate project in the same directory as the original project.

• Retry All Failed Design Points: If checked, DX will make additional attempts to solve all of the design points that failed to update during the first run. If this option is selected, the following options are available:

– Number of Retries: Allows you to specify the number of times DX will try to update the failed design points. Defaults to 5.

– Retry Delay (seconds): Allows you to specify the number of seconds that will elapse between tries. Defaults to 120.

The Graph category includes:

• Chart Resolution: Changes the number of points used by the individual continuous input parameter axes in the 2D/3D Response Surface charts. Increasing the number of points enhances the viewing resolution of these charts. The range is from 2 to 100. The default is 25.

The Sensitivity category includes:

• Significance Level: Controls the relative importance or significance of input variables. The allowable range is from 0.0, meaning all input variables are "assumed" to be insignificant, to 1.0, meaning all input variables are "assumed" to be significant. The default is 0.025.

• Correlation Coefficient Calculation Type: Specifies the calculation method for determining sensitivity correlation coefficients. The following choices are available:

– Rank Order (Spearman) (default): Correlation coefficients are evaluated based on the rank of samples.

– Linear (Pearson): Correlation coefficients are evaluated based on the sample values.
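The practical difference between the two calculation types can be shown with a small pure-Python sketch (illustrative only; it ignores tied ranks, which a production implementation must handle). For a monotonic but nonlinear relationship, the rank-based Spearman coefficient is exactly 1.0 while the linear Pearson coefficient is less than 1.0.

```python
def pearson(x, y):
    """Linear (Pearson) correlation: based on the sample values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Rank Order (Spearman) correlation: Pearson applied to the ranks
    of the samples (no tie handling in this sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Monotonic but nonlinear: Spearman sees a perfect relationship.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [v ** 3 for v in x]
print(round(spearman(x, y), 3))  # → 1.0
print(pearson(x, y) < 1.0)       # → True
```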

The Parameter Options category includes:


• Display Parameter Full Name:

– If checked (default), displays the full parameter name (i.e., "P1 — Thickness") wherever possible (depending on the available space in the interface).

– If unchecked, displays the short parameter name (i.e., "P1") in the design exploration user interface, except in the Outline view and in tables where parameters are listed vertically.

• Parameter Naming Convention: Sets the naming style of input parameters within design exploration. The following choices are available:

– Taguchi Style: Names the parameters as Continuous Variables and Noise Variables.

– Uncertainty Management Style (default): Names the parameters as Design Variables and Uncertainty Variables.

– Reliability Based Optimization Style: Names the parameters as Design Variables and Random Variables.

The Messages category includes:

• Confirm if Min-Max Search can take a long time: Allows you to specify whether you should be alerted, before performing a Min-Max Search when there are discrete or Manufacturable Values input parameters, that the search may be very time-consuming. When selected (default), an alert will always be displayed before a Min-Max Search including discrete or Manufacturable Values parameters. When deselected, the alert will not be displayed.

• Alert if ‘Design Point Update Order’ is different from the recommended setting: Allows you to specify whether you should be alerted, before a design point update operation, that the design point update order has been changed from the recommended setting. When selected (default) and the recommended design point update order has been modified, an alert will always be displayed before the design points are updated. When deselected, the alert will not be displayed.

Design of Experiments Options

The Design of Experiments Type category includes the following settings. Descriptions of each setting are included in the Working with Design Points (p. 193) section under "DesignXplorer Systems and Components" (p. 27).

• Central Composite Design (default): Note that changing the CCD type in the Options dialog box will generate new design points, provided the study has not yet been solved.

• Optimal Space-Filling Design

• Box-Behnken Design

• Latin Hypercube Sampling Design

When Central Composite Design is selected, the following options are available in the Central Composite Design Options section:

• Design Type: The following options are available:

– Face-Centered

– Rotatable


– VIF-Optimality

– G-Optimality

– Auto-Defined

• Template Type: Enabled for the Rotatable and Face-Centered design types. The following options are available:

– Standard

– Enhanced

When Optimal Space-Filling Design or Latin Hypercube Sampling Design is selected, the following options are available in the Latin Hypercube Sampling or Optimal Space-Filling section:

• Design Type: The following choices are available:

– Max-Min Distance (default)

– Centered L2

– Maximum Entropy

• Maximum Number of Cycles: Indicates the maximum number of iterations that the base DOE undergoes for the final sample locations to conform to the chosen DOE type.

• Samples Type: Indicates the specific method chosen to determine the number of samples. The following options are available:

– CCD Samples (default): Number of samples is the same as that of a corresponding CCD design.

– Linear Model Samples: Number of samples is the same as that of a design of linear resolution.

– Pure Quadratic Model Samples: Number of samples is the same as that of a design of pure quadratic (constant and quadratic terms) resolution.

– Full Quadratic Model Samples: Number of samples is the same as that of a design of full quadratic (all constant, quadratic and linear terms) resolution.

– User-Defined Samples: User determines the number of DOE samples to be generated.

For details on the different types of samples, see Optimal Space-Filling Design (OSF) (p. 65) and Latin Hypercube Sampling Design (LHS) (p. 68).

• Number of Samples: For a Samples Type of User-Defined Samples, specifies the default number of samples to be generated. Defaults to 10.
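The basic idea behind Latin Hypercube Sampling — each parameter range divided into equal strata, with each stratum sampled exactly once per parameter — can be sketched as follows. This illustrates only the stratification concept; DesignXplorer's LHS and OSF designs additionally optimize the placement of the points (Max-Min Distance, Centered L2, Maximum Entropy), which is not reproduced here.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Minimal Latin Hypercube sampler: for each parameter, split its
    range into n_samples equal strata and place one point per stratum."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for j, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)          # random stratum order per parameter
        width = (hi - lo) / n_samples
        for i in range(n_samples):
            # A random position inside the assigned stratum.
            samples[i][j] = lo + (strata[i] + rng.random()) * width
    return samples

# Two input parameters, 5 samples: each row/column stratum used once.
pts = latin_hypercube(5, [(0.0, 1.0), (10.0, 20.0)])
for p in pts:
    print(p)
```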

Response Surface Options

The Type setting includes the following options:

• Standard Response Surface - Full 2nd Order Polynomial (default):


• Kriging: Accurate interpolation method. If selected, the following Kriging Options are available:

– Kernel Variation Type, which can be set to:

→ Variable Kernel Variation (default): Pure Kriging mode.

→ Constant Kernel Variation: Radial Basis Function mode.

• Non-Parametric Regression: Provides improved response quality. Initialized with one of the available DOE types.

• Neural Network: Nonlinear statistical approximation method inspired by biological neural processing. If selected, the following Neural Network Option is available:

– Number of Cells: The number of cells controls the quality of the meta-model. The higher it is, the better the network can capture parameter interactions. The recommended range is from 1 to 10. The default is 3.

Once a meta-model is solved, it is possible to switch to another meta-model type or change the options for the current meta-model in the Properties view of the Response Surface tab. After changing the options in the Properties view, you must Update the Response Surface to obtain the new fitting.

The Color for Response Surface Based Output Values setting allows you to select the color used to display output values that are calculated from a response surface. (Simulation output values that have been calculated from a design point update are displayed in black.) The color you select here will be applied to response surface based output values in the Properties and Table views of all components and in the Results view of the Optimization component (specifically, for the Candidate Points chart and the Samples chart). For example:

• The DOE table, derived parameters with no output parameter dependency, verified candidate points, and all output parameters calculated in a Direct Optimization system are simulation-based, and so are displayed in black text.

• The table of response points, the Min/Max search table, and candidate points in a Response Surface Optimization system are based on response surfaces, and so are displayed in the color specified by this option.

In a Direct Optimization, derived and direct output parameters are all calculated from a simulation and so are all displayed in black. In a Response Surface Optimization, the color used for derived values depends on the definition (expression) of the derived parameter. If the expression of the parameter depends on at least one output parameter, either directly or indirectly, the derived values are considered to be based on a response surface and so will be displayed in the color specified by this option.

Sampling and Optimization Options

The Random Number Generation category includes:

• Repeatability: When checked, seeds the random number generator to the same value each time you generate uncertainty analysis samples. With Repeatability unchecked, the random number generator is seeded with the system time every time you generate samples. This applies to all methods in design exploration where random numbers are needed, as in Six Sigma Analysis or Goal Driven Optimization. The default is checked.

The Weighted Latin Hypercube category includes:


• Sampling Magnification: Indicates the factor by which the number of regular Latin Hypercube samples is reduced while achieving a certain probability of failure (Pf). For example, the lowest probability of failure for 1000 Latin Hypercube samples is approximately 1/1000; a magnification of 5 means that 200 weighted/biased Latin Hypercube samples are used to approach the lowest probability of 1/1000. It is recommended not to use a magnification greater than 5. A greater magnification may result in significant Pf error due to highly biased samples. The default is 5.
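The arithmetic in the magnification example can be checked directly (an illustrative calculation only, not DesignXplorer code):

```python
def weighted_lhs_samples(regular_samples, magnification):
    """Number of weighted/biased LHS samples used in place of the
    regular ones, per the magnification example in the text."""
    return regular_samples // magnification

regular = 1000
lowest_pf = 1.0 / regular                   # ≈ 1/1000 for regular LHS
reduced = weighted_lhs_samples(regular, 5)  # magnification of 5
print(reduced, lowest_pf)  # → 200 0.001
```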

The Optimization category includes:

• Constraint Handling

This option can be used for any optimization application and is best thought of as a "constraint satisfaction" filter on samples generated from the Screening, MOGA, NLPQL, or Adaptive Single-Objective runs. This is especially useful for Screening samples to detect the edges of solution feasibility for highly constrained nonlinear optimization problems. The following choices are available:

– Relaxed: Implies that the upper, lower, and equality constrained objectives of the candidate points shown in the Table of Optimization are treated as objectives; thus any violation of the objectives is still considered feasible.

– Strict (default): When chosen, the upper, lower, and equality constraints are treated as hard constraints; that is, if any of them are violated, then the candidate is no longer displayed. So, in some cases no candidate points may be displayed, depending on the extent of constraint violation.
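Conceptually, the two modes behave like the following filter sketch (the candidate structure, parameter names, and constraint representation are hypothetical, not DesignXplorer's API):

```python
def filter_candidates(candidates, constraints, mode="Strict"):
    """Sketch of constraint handling: 'Strict' drops any candidate that
    violates a constraint; 'Relaxed' keeps every candidate, treating the
    violated constraints merely as objectives to improve."""
    def feasible(point):
        return all(lo <= point[name] <= hi
                   for name, (lo, hi) in constraints.items())
    if mode == "Relaxed":
        return list(candidates)
    return [p for p in candidates if feasible(p)]

# Hypothetical candidate points and an upper-bound stress constraint.
candidates = [
    {"mass": 4.2, "stress": 180.0},
    {"mass": 5.1, "stress": 260.0},  # violates the stress constraint
]
constraints = {"stress": (0.0, 200.0)}
print(len(filter_candidates(candidates, constraints, "Strict")))   # → 1
print(len(filter_candidates(candidates, constraints, "Relaxed")))  # → 2
```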


DesignXplorer Systems and Components

The topics in this section provide an introduction to using specific DesignXplorer systems and their individual components for your design exploration projects.

What is Design Exploration?
DesignXplorer Systems
DesignXplorer Components

What is Design Exploration?

Design Exploration is a powerful approach used by DesignXplorer for designing and understanding the analysis response of parts and assemblies. It uses a deterministic method based on Design of Experiments (DOE) and various optimization methods, with parameters as its fundamental components. These parameters can come from any supported analysis system, DesignModeler, and various CAD systems. Responses can be studied, quantified, and graphed. Using a Goal Driven Optimization method, the deterministic method can obtain a multiplicity of design points. You can also explore the calculated response surface and generate design points directly from the surface.

After setting up your analysis, you can pick one of the system templates from the Design Exploration toolbox to:

• parameterize your solution and view an interpolated response surface for the parameter ranges

• view the parameters associated with the minimum and maximum values of your outputs

• create a correlation matrix that shows you the sensitivity of outputs to changes in your input parameters

• set output objectives and see what input parameters will meet those objectives

• perform a Six Sigma Analysis on your model

DesignXplorer Systems

The following DesignXplorer systems will be available if you have installed the ANSYS DesignXplorer product and have an appropriate license:

Parameters Correlation System
Response Surface System
Goal Driven Optimization Systems
Six Sigma Analysis System

Parameters Correlation System

You can use the Parameters Correlation system to identify the major input parameters. This is achieved by analyzing their correlation and the relative weight of the input parameters for each output parameter.

Indeed, when your project has many input parameters (more than 10 or 15), building an accurate response surface becomes an expensive process. So it is recommended to identify the major input parameters with the Parameters Correlation feature in order to disable the minor input parameters when building the response surface. With fewer input parameters, the response surface will be more accurate and less expensive to build.

The Parameters Correlation system contains the following component:

• Parameters Correlation

Response Surface System

There is one response surface for every output parameter. Output parameters are represented in terms of the input parameters, which are treated as independent variables.

For the deterministic method, response surfaces for all output parameters are generated in two steps:

• Solving the output parameters for all design points as defined by a Design of Experiments

• Fitting the output parameters as a function of the input parameters using regression analysis techniques
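The second step can be illustrated with a deliberately reduced sketch: a least-squares fit of a quadratic in a single input parameter, solved via the normal equations. The real "Full 2nd Order Polynomial" response surface handles many inputs, cross terms, and more robust numerics; this only shows the underlying regression idea.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x**2 (one-input sketch)."""
    n = len(xs)
    # Design matrix A and normal equations A^T A c = A^T y.
    A = [[1.0, x, x * x] for x in xs]
    ata = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(3)]
           for i in range(3)]
    aty = [sum(A[k][i] * ys[k] for k in range(n)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    M = [row[:] + [b] for row, b in zip(ata, aty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    coeffs = [0.0] * 3
    for i in reversed(range(3)):
        coeffs[i] = (M[i][3] - sum(M[i][j] * coeffs[j]
                                   for j in range(i + 1, 3))) / M[i][i]
    return coeffs

# Exact quadratic data: the fit should recover the coefficients.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 0.5 * x - 0.25 * x * x for x in xs]
c0, c1, c2 = fit_quadratic(xs, ys)
print(round(c0, 6), round(c1, 6), round(c2, 6))  # → 2.0 0.5 -0.25
```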

The Response Surface system contains the following cells:

• Design of Experiments

• Response Surface

Goal Driven Optimization Systems

You can use Goal Driven Optimization to state a series of design goals, which will be used to generate optimized designs. Both objectives and constraints can be defined and assigned to each output parameter. Goals may be weighted in terms of importance.

There are two types of Goal Driven Optimization systems: Response Surface Optimization and Direct Optimization.

The Response Surface Optimization GDO system contains the following components:

• Design of Experiments

• Response Surface

• Optimization

The Direct Optimization GDO system contains the following component:

• Optimization

Six Sigma Analysis System

Six Sigma Analysis (SSA) is an analysis technique for assessing the effect of uncertain input parameters and assumptions on your model.

A Six Sigma Analysis allows you to determine the extent to which uncertainties in the model affect the results of an analysis. An "uncertainty" (or random quantity) is a parameter whose value is impossible to determine at a given point in time (if it is time-dependent) or at a given location (if it is location-dependent). An example is ambient temperature: you cannot know precisely what the temperature will be one week from now in a given city.
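The underlying idea — propagating an uncertain input through the model and estimating the probability that a limit is exceeded — can be sketched with a toy Monte Carlo loop. The response function, the temperature distribution, and the limit below are all invented for illustration; SSA itself works from a response surface with dedicated sampling, not this brute-force loop.

```python
import random

rng = random.Random(42)

def response(temperature):
    # Hypothetical model: stress grows linearly with ambient temperature.
    return 100.0 + 0.8 * (temperature - 20.0)

limit = 110.0
n = 100_000
# Ambient temperature as an uncertain input: normal, mean 20, sigma 5.
exceed = sum(response(rng.gauss(20.0, 5.0)) > limit for _ in range(n))
print(f"P(stress > {limit}) ~ {exceed / n:.4f}")
```

For this toy setup the limit is exceeded only when the temperature is 2.5 standard deviations above its mean, so the estimated probability is small (on the order of 0.006).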


The Six Sigma Analysis system contains the following components:

• Design of Experiments (SSA)

• Response Surface (SSA)

• Six Sigma Analysis

DesignXplorer Components

DesignXplorer systems are made up of one or more components. Double-clicking on a component in the Project Schematic opens the tab for that component. The tab will contain the following views: Outline, Properties, Table, and Chart. The content of the sections varies depending on the object selected in the Outline view. The following topics summarize the tabs of the various components that make up the design exploration systems.

Design of Experiments Component Reference
Parameters Correlation Component Reference
Response Surface Component Reference
Optimization Component Reference
Six Sigma Analysis Component Reference

Design of Experiments Component Reference

Design of Experiments (DOE) is a technique used to determine the location of sampling points and is included as part of the Response Surface, Goal Driven Optimization, and Six Sigma Analysis systems.

A Design of Experiments cell is part of the Response Surface, Goal Driven Optimization, and Six Sigma systems (although the DOE for a Six Sigma Analysis system treats the inputs and outputs differently and cannot share data with the other types of DOEs). The Design of Experiments tab allows you to preview (generates the points but does not solve them) or generate and solve a DOE Design Point matrix. On the DOE tab, you can set the input parameter limits, set the properties of the DOE solve, view the Table of Design Points, and view several parameter charts.

Related Topics:

• "Using Design of Experiments" (p. 63)

• Working with Design Points (p. 193)

• Design of Experiments Options (p. 22)

The following views in the Design of Experiments tab allow you to customize your DOE and view the updated results:

Outline: Allows you to:

• Select the Design of Experiments cell and change its properties.

• Select the input parameters and change their limits.

• Select the output parameters and view their minimum and maximum values.

• Select one of the available charts for display. You can insert directly into the Outline as many new charts from the chart toolbox as you want. When a chart is selected, you can change the data properties of the chart (chart type and parameters displayed).

Properties: (See Setting Up the Design of Experiments (p. 63) for complete descriptions of DOE options.)

• Preserve Design Points After DX Run: Select this check box if you want to retain design points at the project level from each update of this DOE you perform. If this property is set, the following property is available:

– Export Project for Each Preserved Design Point: If selected, in addition to saving the design points to the project Table of Design Points, exports each of the design exploration design points as a separate project in the same directory as the original project.

• Number of Retries: Indicates the number of times DesignXplorer will try to update the failed design points. If the Retry All Failed Design Points option is not selected in the Options dialog, defaults to 0. Otherwise, defaults to the value specified for the Number of Retries option.

– Retry Delay (seconds): If the Number of Retries property is not set to 0, indicates how many seconds will elapse between tries.

• Design of Experiments Type

– Central Composite Design

– Optimal Space-Filling Design

– Box-Behnken Design

– Sparse Grid Initialization

– Custom

– Custom + Sampling

– Latin Hypercube Sampling Design

• Settings for selected Design of Experiments Type, where applicable
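As a brief illustration of one of the DOE types listed above, Latin Hypercube Sampling places exactly one sample in each stratum of every input's range. The sketch below is a hypothetical, minimal implementation of the idea (assuming inputs normalized to [0, 1]); it is not DesignXplorer's implementation.

```python
import numpy as np

# Hypothetical sketch of Latin Hypercube Sampling: each input's range is
# divided into n equal strata, and each of the n samples occupies a
# distinct stratum in every dimension.

def latin_hypercube(n_samples, n_inputs, seed=None):
    rng = np.random.default_rng(seed)
    # One random point inside each of the n strata, per dimension.
    strata = (np.arange(n_samples)[:, None]
              + rng.random((n_samples, n_inputs))) / n_samples
    # Independently permute the strata in each dimension so that no two
    # samples share a stratum in any input.
    for j in range(n_inputs):
        strata[:, j] = strata[rng.permutation(n_samples), j]
    return strata  # values in [0, 1); scale to each parameter's bounds

samples = latin_hypercube(8, 2, seed=0)
```

Compared with a full factorial grid, this covers each input's range with far fewer design points, which is why space-filling designs are attractive when each point requires a full solve.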

Table: Displays a matrix of the design points that is populated automatically as the points are solved. Displays input and output parameters for each design point. You can add points manually if the DOE Type is set to Custom.

Chart: Displays the available charts described below.

Related Topics:

Parameters Parallel Chart
Design Points vs Parameter Chart

Parameters Parallel Chart

Generates a graphical display of the DOE matrix using parallel Y axes to represent all of the inputs and outputs. Select the chart from the Outline to display it in the Chart view. Use the Properties view as follows:

Properties:

• Display Full Parameter Name: Select to display the full parameter name in the results.

• Use the Enabled check box to enable or disable the display of parameter axes on the chart.

• Click on a line on the chart to display input and output values for that line in the Input Parameters and Output Parameters sections of the Properties view.

• Change various generic chart properties.

The chart plots only the updated design points. If the DOE matrix does not contain any updated design points yet, the output parameters are automatically disabled from the chart; all design points are plotted, but only the axes corresponding to the input parameters are visible.

You can also look at this chart data a different way. If you right-click on the chart background, select Edit Properties, and then change Chart Type to Spider Chart, the chart will show all input and output parameters arranged in a set of equally spaced radial axes. Each DOE point is represented by a corresponding envelope defined in these radial axes.

Design Points vs Parameter Chart

Generates a graphical display for plotting design points vs. any input or output parameter. Select the chart cell in the Outline to display it in the Chart view. Use the Properties view as follows:

Properties:

• Display Full Parameter Name: Select to display the full parameter name in the results.

• X-Axis (top, bottom), Y-Axis (right, left): Design points (X top and bottom axes) can be plotted against any of the input and output parameters (any of the axes).

• Change various generic chart properties.

Parameters Correlation Component Reference

A linear association between parameters is evaluated using Spearman's rank correlation coefficient or Pearson's product-moment coefficient. A correlation/sensitivity matrix is generated to demonstrate correlation between input parameters and sensitivity of output parameters to input parameters.

A nonlinear (quadratic) association between parameters is evaluated using the coefficient of determination of a quadratic regression between parameters. A determination matrix is generated for the parameters to convey quadratic correlation, if any exists, that is not detectable with the linear Spearman's or Pearson's correlation coefficients.
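The three measures just described can be sketched as follows (a minimal, hypothetical illustration with invented sample data; not DesignXplorer's implementation). Note how a purely quadratic relationship yields a near-zero Pearson coefficient while the quadratic determination coefficient is 1, which is exactly the case the determination matrix is meant to catch.

```python
import numpy as np

# Hypothetical sketch of the three measures: Pearson's linear coefficient,
# Spearman's rank coefficient, and the coefficient of determination (R2)
# of a quadratic regression between two parameters.

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    # Spearman's coefficient is Pearson's applied to the sample ranks.
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(x), rank(y))

def quadratic_r2(x, y):
    # Fit y ~ a*x^2 + b*x + c and report the fraction of variance explained.
    resid = y - np.polyval(np.polyfit(x, y, 2), x)
    return 1.0 - resid.var() / y.var()

x = np.linspace(-1.0, 1.0, 41)
y = x**2  # purely quadratic relationship: linear correlation is ~0
```

Here `pearson(x, y)` is approximately 0 while `quadratic_r2(x, y)` is approximately 1, so the association would be invisible in the correlation matrix but obvious in the determination matrix.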

Related Topics:

• "Using Parameters Correlation" (p. 51)

• Working with Design Points (p. 193)

• Correlation Coefficient Theory

The following views in the Parameters Correlation tab allow you to customize your search and view the results:

Outline: Allows you to:

• Select the Parameters Correlation cell to change its properties and view the number of samples generated for this correlation.

• Select the input parameters and change their limits.

• Select the output parameters and view their minimum and maximum values.

• Select one of the available charts for display. When a chart is selected, you can change the data properties of the chart (chart type and parameters displayed).

Properties:

• Preserve Design Points After DX Run: Select this check box if you want to retain design points at the project level from this parameters correlation. If this property is set, the following property is available:

– Export Project for Each Preserved Design Point: If selected, in addition to saving the design points to the project Table of Design Points, exports each of the design exploration design points as a separate project in the same directory as the original project.

Note

There is no way to view the design points created for a parameters correlation other than preserving them at the project level.

• Number of Retries: Indicates the number of times DesignXplorer will try to update the failed design points. If the Retry All Failed Design Points option is not selected in the Options dialog, defaults to 0. Otherwise, defaults to the value specified for the Number of Retries option. Available only for correlations not linked to a response surface.

– Retry Delay (seconds): If the Number of Retries property is not set to 0, indicates how much time will elapse between tries.

• Reuse the samples already generated – select this check box if you want to reuse the samples generated in a previous correlation.

• Choose Correlation Type algorithm:

– Spearman

– Pearson

• Number of Samples – maximum number of samples to generate for this correlation.

• Choose Auto Stop Type (p. 53):

– Enable Auto Stop

→ Mean Value Accuracy: Available if Auto Stop is enabled. Set the desired accuracy for the mean value of the sample set.

→ Standard Deviation Accuracy: Available if Auto Stop is enabled. Set the desired accuracy for the standard deviation of the sample set.

→ Convergence Check Frequency: Available if Auto Stop is enabled. Number of simulations to execute before checking for convergence.

→ Size of Generated Sample Set: Available if Auto Stop is enabled. Number of samples generated for the correlation solution.

– Execute All Simulations
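The Auto Stop behavior described above can be pictured as a loop that draws samples in batches and stops once the running mean and standard deviation have stabilized within the requested accuracies. The function below is a hypothetical sketch of that idea (the relative-accuracy semantics are an assumption, not DesignXplorer internals).

```python
import random

# Hypothetical sketch of the Auto Stop idea: draw samples in batches and
# stop when the running mean and standard deviation change by less than
# the requested relative accuracies between consecutive convergence checks.

def sample_with_auto_stop(simulate, max_samples, check_every,
                          mean_accuracy, std_accuracy):
    samples, prev_mean, prev_std = [], None, None
    while len(samples) < max_samples:
        samples.extend(simulate() for _ in range(check_every))
        n = len(samples)
        mean = sum(samples) / n
        std = (sum((s - mean) ** 2 for s in samples) / n) ** 0.5
        if prev_mean is not None:
            if (abs(mean - prev_mean) <= mean_accuracy * abs(prev_mean)
                    and abs(std - prev_std) <= std_accuracy * prev_std):
                break  # converged: stop before reaching max_samples
        prev_mean, prev_std = mean, std
    return samples

random.seed(1)
samples = sample_with_auto_stop(lambda: random.gauss(10.0, 1.0),
                                max_samples=2000, check_every=100,
                                mean_accuracy=0.01, std_accuracy=0.05)
```

With "Execute All Simulations" the loop would instead run to `max_samples` unconditionally.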

Table: Displays both a correlation matrix and a determination matrix for the input and output parameters.

Chart: Displays the available charts described below.

The charts include:
Correlation Scatter Chart
Correlation Matrix Chart
Determination Matrix Chart
Sensitivities Chart
Determination Histogram Chart

Correlation Scatter Chart

Shows a sample scatter plot of a parameter pair. Two trend lines, linear and quadratic curves, are added to the sample points of the parameter pair. The trend line equations are displayed in the chart legend. The chart conveys a graphical presentation of the degree of correlation between the parameter pair in linear and quadratic trends.

You can create a Correlation Scatter chart for a given parameter combination by right-clicking on the associated cell in the Correlation Matrix chart and selecting Insert <x-axis> vs <y-axis> Correlation Scatter.

To view the Correlation Scatter chart, select Correlation Scatter under the Chart node of the Outline view. Use the Properties view as follows:

Properties:

• Choose the parameters to display on the X Axis and Y Axis.

• Enable or disable the display of the Quadratic and Linear trend lines.

• View the Coefficient of Determination and the equations for the Quadratic and Linear trend lines.

• Change various generic chart properties.

Related Topics:

• Viewing the Quadratic Correlation Information (p. 55)

Correlation Matrix Chart

Provides information about the linear correlation between a parameter pair, if any. The degree of correlation is indicated by color in the matrix. Placing your cursor over a particular square will show you the correlation value for the two parameters associated with that square.

To view the Correlation Matrix chart, select Correlation Matrix under the Chart node of the Outline view. Use the Properties view as follows:

Properties:

• Enable or disable the parameters that are displayed on the chart.

• Change various generic chart properties.

Related Topics:

• Viewing Significance and Correlation Values (p. 55)

Determination Matrix Chart

Provides information about the nonlinear (quadratic) correlation between a parameter pair, if any. The degree of correlation is indicated by color in the matrix. Placing your cursor over a particular square will show you the correlation value for the two parameters associated with that square.

To view the Determination Matrix chart, select Determination Matrix under the Chart node of the Outline view. Use the Properties view as follows:

Properties:

• Enable or disable the parameters that are displayed on the chart.

• Change various generic chart properties.

Related Topics:

• Viewing the Quadratic Correlation Information (p. 55)

Sensitivities Chart

Allows you to graphically view the global sensitivities of each output parameter with respect to the input parameters.

To view the Sensitivities chart, select Sensitivities under the Chart node of the Outline view. Use the Properties view as follows:

Properties:

• Chart Mode: Set to Bar or Pie.

• Enable or disable the parameters that are displayed on the chart.

• Change various generic chart properties.

Related Topics:

• Working with Sensitivities (p. 204)

• Viewing Significance and Correlation Values (p. 55)

Determination Histogram Chart

Determination Histogram charts allow you to see the impact of the input parameters for a given output parameter.

The full model R2 represents the variability of the output parameter that can be explained by a linear (or quadratic) correlation between the input parameters and the output parameter.

The value of the bars corresponds to the linear (or quadratic) determination coefficient of each input associated with the selected output.

To view the Determination Histogram chart, select Determination Histogram under the Chart node of the Outline view. Use the Properties view as follows:

Properties:

• Determination Type: set to Linear or Quadratic.

• Threshold R2: enables you to filter the input parameters by hiding the input parameters with a determination coefficient lower than the given threshold.

• Choose the output parameter

• Change various generic chart properties.

Response Surface Component Reference

Builds a response surface from the DOE design points' input and output values based on the chosen Response Surface Type. In the Response Surface tab, you can view the input parameter limits and initial values, set the properties of the response surface algorithm, view the Response Points table, and view several types of Response charts.

Related Topics:

• Working with Design Points (p. 193)

• "Using Response Surfaces" (p. 77)

• Standard Response Surface - Full 2nd-Order Polynomial (p. 77)

• Kriging (p. 78)

• Non-Parametric Regression (p. 83)

• Neural Network (p. 83)

• Sparse Grid (p. 84)

• Performing a Manual Refinement (p. 92)

• How Kriging Auto-Refinement Works (p. 79)

• Goodness of Fit (p. 92)

• Min-Max Search (p. 98)

The following views in the Response Surface tab allow you to customize your response surface and view the results:

Outline: Allows you to:

• Select the Response Surface cell and change its properties, including Response Surface Type.

• Select input parameters to view their properties.

• Select the output parameters and view their minimum and maximum values, and set their transformation type.

• Enable or disable the Min-Max Search and select the Min-Max Search cell to view the table of results.

• View Goodness of Fit information for all response surfaces for the system.

• Select a response point and view its input and output parameter properties.

– You can change the input values in the Properties view and see the corresponding output values.

– From the right-click context menu, you can reset the input parameter values of the selected response point to the initial values that were used to solve for the response surface.

– In the Properties view, you can enter notes about the selected response point.

• Select added response points and view their properties. Drag charts from the toolbox to the response points to view data for the added response points. You can insert a new chart for the response point via the right-click menu.

• Select one of the available charts for display. When a chart is selected, you can change the data properties of the chart (chart type and parameters displayed).

Properties:

• Preserve Design Points After DX Run: Select this check box if you want to retain at the project level the design points that are created when refinements are run for this response surface. If this property is set, the following property is available:

– Export Project for Each Preserved Design Point: If selected, in addition to saving the design points to the project Table of Design Points, exports each of the design exploration design points as a separate project in the same directory as the original project.

Note

Selecting this box will not preserve any design points unless you run either a manual refinement or one of the Kriging refinements, since the Response Surface uses the design points generated by the DOE. If the DOE of the Response Surface does not preserve design points, when you perform a refinement, only the refinement points will be preserved at the project level. If the DOE is set to preserve design points, and the Response Surface is set to preserve design points, when you perform a refinement the project will contain the DOE design points and the refinement points.

• Number of Retries: Indicates the number of times DesignXplorer will try to update the failed design points. If the Retry All Failed Design Points option is not selected in the Options dialog, defaults to 0. Otherwise, defaults to the value specified for the Number of Retries option.

– Retry Delay (seconds): If the Number of Retries property is not set to 0, indicates how much time will elapse between tries.

• Choice of Response Surface Type algorithm

– Standard Response Surface – Full 2nd-Order Polynomial

– Kriging

– Non-Parametric Regression

– Neural Network

– Sparse Grid

• Refinement Type: The Manual option is available for all Response Surface Types (except Sparse Grid, which only supports auto-refinement), allowing you to enter refinement points manually; the Auto-Refinement option is also available for the Kriging Response Surface Type.

• Generate Verification Points: Select this check box to have the response surface generate and calculate the number of verification points of your choice (1 by default). The results are included in the table and chart of the Goodness of Fit information.

• Settings for selected algorithm, as applicable
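What a verification point measures can be sketched simply: evaluate the real model at fresh points that were not used to build the surface and compare against the surface's predictions, as a goodness-of-fit report does. The example below is a hypothetical illustration using a cubic polynomial as a stand-in response surface for a known analytic model.

```python
import numpy as np

# Hypothetical sketch of a verification-point check: the "true" model is
# evaluated at points not used for fitting and compared with the fitted
# surface's predictions.

true_model = lambda x: np.sin(x)

# A crude stand-in "response surface": cubic polynomial fitted to DOE points.
doe_x = np.linspace(0.0, 3.0, 7)
surface = np.poly1d(np.polyfit(doe_x, true_model(doe_x), 3))

verification_x = np.array([0.4, 1.3, 2.6])  # fresh points, not in the DOE
errors = np.abs(surface(verification_x) - true_model(verification_x))
root_mean_square_error = float(np.sqrt(np.mean(errors**2)))
```

A small error at verification points gives confidence in the fit precisely because those points played no role in building the surface.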

Table:

• Displays a list of the refinement points created for this Response Surface. Refinement points created manually and generated by using the Kriging refinement capabilities are listed in this table. Add a manual refinement point by entering a value for one of the inputs on the New Refinement Point row of the table.

• Displays a list of the response points for the response surface. Displays input and output parameters for each response point. You can change the points in the table and add new ones in several different ways:

– You can change the input values of a response point and see the corresponding output value.

– You can reset the input parameter values of a response point to the initial values that were used to solve for the Response Surface by right-clicking on the response point in the table and selecting Reset Response Point.

– You can add response points or refinement points to the table by right-clicking on a chart such as the Response Surface chart to select a point, or selecting one or several rows in a table (for example, the Min-Max Search or the optimization Candidates tables), and selecting Explore Response Surface at Point or Insert as Refinement Point(s).

– You can create new response points by directly entering values into the input parameter cells of the row designated with an asterisk (*).

• Displays a list of the verification points for the response surface. Verification points created manually and generated automatically during the response surface update are listed in this table. Various operations are available for the verification points:

– You can add a verification point by entering the input parameter values.

– You can delete one or several verification points using the right-click Delete operation.

– You can update one or several verification points using the right-click Update operation, which performs a design point update for each of the selected points.

Chart: Displays the available charts described below for each response point.

Related Topics:

Response Chart
Local Sensitivity Charts
Spider Chart

Response Chart

The Response chart allows you to graphically view the impact that changing each input parameter has on the displayed output parameter. Select the Response cell in the Outline to display the Response chart in the Chart view.

You can add response points to the Response Surface Table by right-clicking on the Response chart and selecting Explore Response Surface at Point, Insert as Design Point, Insert as Refinement Point, or Insert as Verification Point. Use the Properties view as follows:

Properties:

• Display Parameter Full Name: Specify whether the full parameter name or the parameter ID will be displayed on the chart.

• Mode: Set to 2D, 3D, or 2D Slices.

• Chart Resolution Along X: Set the number of points that you want to appear on the X-axis response curve. Defaults to 25.

• Chart Resolution Along Y: Set the number of points that you want to appear on the Y-axis response curve (for 3D). Defaults to 25.

• Number of Slices: Set the number of slices (for 2D Slices with continuous input parameters).

• Show Design Points: Specify whether design points will be displayed on the chart.

• Choose the input parameter(s) to display on the X Axis, Y Axis (3D), and Slice Axis (2D Slices).

• Choose the output parameter to display on the Z Axis (3D) or Y Axis (2D and 2D Slices).

• Use the sliders or drop-down menus to change the values of the input parameters that are not displayed to see how they affect the values of displayed parameters. You can enter specific values in the boxes above the sliders.

• View the interpolated output parameter values for the selected set of input parameter values.

• Change various generic chart properties.

Related Topics:

• Response Surface Charts (p. 99)

Local Sensitivity Charts

Allows you to graphically view the impact that changing each input parameter has on the output parameters. Select the Local Sensitivity or Local Sensitivity Curves chart cell under Response Point in the Outline to display the chart in the Chart view. Use the Properties view as follows:

Properties:

• Chart Mode: Set to Bar or Pie (Local Sensitivity chart).

• Axis Range: Set to Use Min Max of the Output Parameter or Use Curves Data (Local Sensitivity Curves chart).

• Chart Resolution: Set the number of points per curve. Defaults to 25.

• Use the sliders to change the values of the input parameters to see how the sensitivity is changed for each output.

• View the interpolated output parameter values for the selected set of input parameter values.

• Change various generic chart properties.

Related Topics:

• Response Surface Charts (p. 99)

• Using Local Sensitivity Charts (p. 112)

Spider Chart

Enables you to visualize the impact that changing the input parameters has on all of the output parameters simultaneously. Select the Spider chart cell in the Outline to display the chart in the Chart view. Use the Properties view as follows:

Properties:

• Use the sliders to change the values of the input parameters to see how they affect the output parameters.

• View the interpolated output parameter values for the selected set of input parameter values.

• Change various generic chart properties.

You can also look at this chart data a different way. First, right-click on the chart background, select Edit Properties, and change Chart Type to Parallel Coordinate. Then click on the Spider cell under Response Point in the Outline view and use the input parameter sliders in the Properties view to see how specific settings of various input parameters change the output parameters, which are arranged in parallel 'Y' axes in the chart.

Related Topics:

• Response Surface Charts (p. 99)

• Using the Spider Chart (p. 112)

Optimization Component Reference

Goal Driven Optimization (GDO) is a constrained, multi-objective optimization technique in which the “best” possible designs are obtained from a sample set given the objectives or constraints you set for parameters. Each type of GDO system (Response Surface Optimization and Direct Optimization) contains an Optimization component. The DOE and Response Surface cells are used in the same way as described previously in this section. For Direct Optimization, the creation of data transfer links is also possible.
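The core selection idea (keeping the feasible, non-dominated points of a sample set) can be sketched as follows. This is a hypothetical illustration with invented design data for two minimized objectives; it is not the MOGA, NLPQL, or any other optimizer implemented in DesignXplorer.

```python
# Hypothetical sketch of selecting "best" designs from a sample set:
# keep the feasible designs and return the non-dominated (Pareto-optimal)
# points for two minimized objectives.

def pareto_candidates(samples, constraint, objectives):
    feasible = [s for s in samples if constraint(s)]
    def dominated(a):
        # a is dominated if some feasible b is no worse in every objective
        # and strictly better in at least one.
        return any(all(f(b) <= f(a) for f in objectives) and
                   any(f(b) < f(a) for f in objectives)
                   for b in feasible if b is not a)
    return [s for s in feasible if not dominated(s)]

# Invented designs described by (mass, stress); both are minimized, and
# stress must stay below an allowable limit.
designs = [(10.0, 250.0), (12.0, 180.0), (11.0, 190.0), (15.0, 170.0),
           (13.0, 175.0), (14.0, 195.0)]
front = pareto_candidates(
    designs,
    constraint=lambda d: d[1] <= 200.0,          # stress constraint
    objectives=[lambda d: d[0], lambda d: d[1]]  # minimize mass and stress
)
```

Here (10.0, 250.0) is excluded as infeasible and (14.0, 195.0) as dominated; the remaining points form the trade-off front from which candidate designs would be drawn.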

Related Topics:

• "Using Goal Driven Optimization" (p. 125)

• Understanding Goal Driven Optimization (p. 224)

The following views in the Optimization tab allow you to customize your GDO and view the results:

Outline: Allows you to select the following nodes and perform related actions in the tab:

• Optimization node:

– Change optimization properties and view the size of the generated sample set.

– View an optimization summary with details on the study, method, and returned candidate points.

– View the Convergence Criteria chart for the optimization. See Using the Convergence Criteria Chart (p. 162).

• Objectives and Constraints node:

– View the objectives and constraints defined in the project.

– Enable or disable objectives and constraints.

– Select an objective or constraint and view its properties, the calculated minimum and maximum values of each of the outputs, and its History chart. See History Chart (p. 43) for details.

• Domain node:

– View the available input parameters for the optimization.

– Enable or disable input parameters or parameter relationships.

– Select an input parameter or parameter relationship to view or edit its properties, or to see its History chart. See History Chart (p. 43) for details.

• Raw Optimization Data node: For Direct Optimization systems, when an optimization update is finished, the design point data calculated during the optimization is saved by DesignXplorer. The data can be accessed by clicking the Raw Optimization Data node in the Outline view.

Note

The design point data is displayed without analysis or optimization results; the data does not show feasibility, ratings, Pareto fronts, etc.

• Convergence Criteria node: Select this node to view the Convergence Criteria chart and specify the criteria to be displayed. See Using the Convergence Criteria Chart (p. 162).

• Results node:

Select one of the results types available to view results in the Charts view and, in some cases, in the Table view. When a result is selected, you can change the data properties of its related chart (x- and y-axis parameters, parameters displayed on bar chart, etc.) and edit its table data.

Properties: When the Optimization node is selected in the Outline view, the Properties view allows you to specify:

• Method Name

– MOGA

– NLPQL

– MISQP

– Screening

– Adaptive Single-Objective

– Adaptive Multiple-Objective

– External optimizers as defined by the optimization extension(s) loaded to the project. For more information, see Performing an Optimization with an External Optimizer (p. 146).

• Relevant settings for the selected Method Name. Depending on the method of optimization, these can include specifications for samples, sample sets, number of iterations, and allowable convergence or Pareto percentages.

See Goal Driven Optimization Methods (p. 128) for more information.

Table: Before the update, allows you to specify the following input parameter domain settings and objective/constraint settings:

• Optimization Domain

– Set the Upper Bound and Lower Bound for each input parameter. For NLPQL and MISQP optimization, there is also a Starting Value setting.

– Set the Left Expression, Right Expression, and Operator for each parameter relationship.

See Defining the Optimization Domain (p. 148) for more information.

• Optimization Objectives and Constraints

– You can define an Objective and/or a Constraint for each parameter. Options vary according to parameter type.

– For any parameter with an Objective Type of Seek Target, you can specify a Target.

– For any parameter with a Constraint Type of Lower Bound <= Values <= Upper Bound, you can specify a target range with the Lower Bound and Upper Bound.

– For any parameter with an Objective or Constraint defined, you can specify the relative Objective Importance or Constraint Importance of that parameter in regard to the other objectives.

– For any parameter with a Constraint defined (i.e., output parameters, discrete parameters, or continuous parameters with Manufacturable Values), you can specify the Constraint Handling for that parameter.

See Defining Optimization Objectives and Constraints (p. 153) for more information on constraints and constraint handling options.

During a Direct Optimization update, if you select any objective, constraint, or input parameter from the Outline view, the Table view shows all of the design points being calculated by the optimization. For iterative optimization methods, the display is refreshed dynamically after each iteration, allowing you to track the progress of the optimization by simultaneously viewing design points in the Table view, the History charts in the Charts view, and the History chart sparklines in the Outline view. (For the Screening optimization method, these objects are updated only after the optimization has completed.)

After the update, when you select Candidate Points under the Results node in the Outline view, the Table view displays up to the maximum number of requested candidates generated by the optimization. The number of gold stars or red crosses displayed next to each objective-driven parameter indicates how well the parameter meets the stated objective, from three red crosses (the worst) to three gold stars (the best). The Table view also allows you to add and edit your own candidate points and view values of candidate point expressions, and it calculates the percentage of variation for each parameter for which an objective has been defined. For more information, see Working with Candidate Points (p. 157).

Note

Goal-driven parameter values with inequality constraints receive either three stars (the constraint is met) or three red crosses (the constraint is violated).

Predicted output values can be verified for each candidate by using the contextual menu entry Verify Candidates by Design Point Update (p. 161).

Results: The following results types are available:

• Convergence Criteria Chart (p. 43)

Release 15.0 - © SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information of ANSYS, Inc. and its subsidiaries and affiliates.


• History Chart (p. 43)

• Candidate Points Results (p. 44)

• Tradeoff Chart (p. 44)

• Samples Chart (p. 45)

• Sensitivities Chart (p. 45)

Convergence Criteria Chart

Allows you to view the evolution of the convergence criteria for each of the iterative optimization methods and is updated after each iteration. It is not available for the Screening optimization method.

The Convergence Criteria chart is the default optimization chart, so it displays in the Chart view unless another type of chart is selected. When the Convergence Criteria node is selected in the Outline view, the Properties view displays the convergence criteria relevant to the selected optimization method in read-only mode. Various generic chart properties can be changed for the Convergence Criteria chart.

The chart remains available when the optimization update is complete. The legend shows the color-coding for the convergence criteria.

Related Topics:

• Using the Convergence Criteria Chart (p. 162)

• Using the Convergence Criteria Chart for Multiple-Objective Optimization (p. 163)

• Using the Convergence Criteria Chart for Single-Objective Optimization (p. 164)

History Chart

Allows you to view the history of a single enabled objective/constraint, input parameter, or parameter relationship during the update process. For iterative optimization methods, the History chart is updated after each iteration. For the Screening optimization method, it is updated only when the optimization is complete.

Select an item under the Objectives and Constraints node or an input parameter or parameter relationship under the Domain node in the Outline view. When an object is selected, the Properties view displays various properties for the object. Various generic chart properties can be changed for both types of History chart.

In the Chart view, the color-coded legend allows you to interpret the chart. In the Outline view, a sparkline graphic of the History chart is displayed next to each objective/constraint and input parameter object.

Related Topics:

• Using the History Chart (p. 165)

• Working with the History Chart in the Chart View (p. 166)

• Viewing History Chart Sparklines in the Outline View (p. 169)


• Using the Objective/Constraint History Chart (p. 170)

• Using the Input Parameter History Chart (p. 171)

Candidate Points Results

The Candidate Points Results object comprises both the Chart and Table views to display candidate points and data for one or more selected parameters. The Chart view shows a color-coded legend that allows you to interpret the samples, candidate points identified by the optimization, candidates inserted manually, and candidates for which output values have been verified by a design point update. In the Table view, output parameter values calculated from a simulation are displayed in black text, while output parameter values calculated from a response surface are displayed in a custom color specified in the Options dialog. For details, see Response Surface Options (p. 23).

Select Candidate Points under the Results node in the Optimization tab Outline view.

Properties: (applied to results in the Table and Chart views)

• Display Parameter Relationships: Select to display parameter relationships in the candidate points table.

• Display Full Parameter Name: Select to display the full parameter name in the results.

• Show Candidates: Select to show candidates in the results.

• Coloring Method: Specify whether the results will be colored according to candidate type or source type.

• Show Samples: Select to show samples in the results.

• Show Starting Point: Select to show the starting point on the chart (NLPQL and MISQP only).

• Show Verified Candidates: Select to show verified candidates in the results (Response Surface Optimization only).

• Enable or disable the display of input and output parameters.

• Change various generic chart properties for the results in the Chart view.

Related Topics:

• Using the Candidate Points Results (p. 173)

• Understanding the Candidate Points Results Display (p. 173)

• Candidate Points Results: Properties (p. 174)

Tradeoff Chart

Allows you to view the Pareto fronts created from the samples generated in the GDO. Select the Tradeoff chart cell under Charts in the Outline to display the chart in the Chart view. Use the Properties view as follows:

Properties:

• Chart Mode: Set to 2D or 3D.

• Number of Pareto Fronts to Show: Set the number of Pareto fronts that are displayed on the chart.


• Show infeasible points: Enable or disable the display of infeasible points. (Available when constraints are defined.)

• Click on a point on the chart to display a Parameters section that shows the values of the input andoutput parameters for that point.

• Set the parameters to be displayed on the X- and Y-axis.

• Change various generic chart properties.

Related Topics:

• Using the Tradeoff Chart (p. 176)

• Using Tradeoff Studies (p. 176)

Samples Chart

Allows you to visually explore a sample set given defined objectives. Select the Samples chart cell under Charts in the Outline to display the chart in the Chart view. Use the Properties view as follows:

Properties:

• Chart Mode: Set to Candidates or Pareto Fronts. For Pareto Fronts, the following options can be set:

– Number of Pareto Fronts to Show: Either enter the value or use the slider to select the number of Pareto Fronts displayed.

– Coloring Method: Can be set to per Samples or per Pareto Front.

• Show infeasible points: Enable or disable the display of infeasible points. (Available when constraints are defined.)

• Click on a line on the chart to display the values of the input and output parameters for that line in the Parameters section. Use the Enabled check box to enable or disable the display of parameter axes on the chart.

• Change various generic chart properties.

Sensitivities Chart

Allows you to graphically view the global sensitivities of each output parameter with respect to the input parameters. Select the Sensitivities chart cell under Charts in the Outline to display the chart in the Chart view. Use the Properties view as follows:

Properties:

• Chart Mode: Set to Bar or Pie.

• Enable or disable the parameters that are displayed on the chart.

• Change various generic chart properties.

Related Topics:

• Working with Sensitivities (p. 204)


• Using the Sensitivities Chart (GDO) (p. 175)

Six Sigma Analysis Component Reference

The Six Sigma Analysis (SSA) system consists of a Design of Experiments cell, a Response Surface cell, and a Six Sigma Analysis cell. The Design of Experiments cell allows you to set up the input parameters and generate the samples for the analysis. The Response Surface that results will be the same as the Response Surface from a standard Response Surface analysis. The Six Sigma Analysis cell allows you to set up the analysis type and view the results of the analysis.

By default, the Six Sigma Analysis DOE includes all of the input parameters and sets them to be uncertainty parameters. When you run a Six Sigma Analysis, you will need to set up the uncertainty parameters in the SSA DOE. If you want any input parameters to be treated as deterministic parameters, uncheck the box next to those parameters in the SSA DOE Outline view.

Related Topics:

Design of Experiments (SSA)
Six Sigma Analysis
Sensitivities Chart (SSA)

Design of Experiments (SSA)

The Design of Experiments component in a Six Sigma Analysis system is the same as the Design of Experiments component in other systems, with the exception of the input parameters being designated as uncertainty parameters. You will need to set up the input parameter options before solving the DOE.

Related Topics:

• Design of Experiments Component Reference (p. 29)

The following views in the Design of Experiments tab contain components that are unique to the Six Sigma Analysis Design of Experiments:

Outline: Allows you to:

• Select the input parameters and change their distribution properties.

• View the Skewness and Kurtosis properties for each input parameter distribution.

• View the calculated Mean and Standard Deviation for all Distribution Types except Normal and Truncated Normal, where you can set those values.

• View the initial Value for each input parameter.

• View the Lower Bound and Upper Bound (bounds used to generate the Design of Experiments) for each parameter.

Properties: (The properties for the uncertainty input parameters are described here.)

• Distribution Type: Type of distribution associated with the input parameter

– Uniform

– Triangular


– Normal

– Truncated Normal

– Lognormal

– Exponential

– Beta

– Weibull

• Maximum Likely Value (Triangular Only)

• Log Mean (Lognormal only)

• Log Standard Deviation (Lognormal only)

• Exponential Decay Parameter (Exponential only)

• Beta Shape R (Beta only)

• Beta Shape T (Beta only)

• Weibull Exponent (Weibull only)

• Weibull Characteristic Value (Weibull only)

• Non-Truncated Normal Mean (Truncated Normal only)

• Non-Truncated Normal Standard Deviation (Truncated Normal only)

• Mean (Normal only)

• Standard Deviation (Normal only)

• Distribution Lower Bound (Can be set for all but Normal and Lognormal)

• Distribution Upper Bound (Uniform, Triangular, Truncated Normal, and Beta only)

Table: When output parameters or charts are selected in the Outline view, displays the normal DOE data grid of the design points that populates automatically during the solving of the points. When an input parameter is selected in the Outline, displays a table that shows, for each of the samples in the set:

• Quantile: The input parameter value point for the given PDF and CDF values.

• PDF: (Probability Density Function) density of the input parameter along the X-axis.

• CDF: (Cumulative Distribution Function) is the integration of PDF along the X-axis.
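The relationship between the Quantile, PDF, and CDF columns can be sketched with SciPy. This is purely illustrative (a hypothetical Normal uncertainty parameter with mean 10 and standard deviation 2), not DesignXplorer's internal API:

```python
from scipy import stats

# Hypothetical uncertainty parameter: Normal distribution, mean 10, std dev 2.
dist = stats.norm(loc=10.0, scale=2.0)

value = 12.0
pdf = dist.pdf(value)     # PDF: density of the input parameter at this value
cdf = dist.cdf(value)     # CDF: integral of the PDF up to this value
quantile = dist.ppf(cdf)  # Quantile: inverting the CDF recovers the value
```

Because the quantile function is the inverse of the CDF, `quantile` comes back equal to the original value of 12.0.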

Chart: When an input parameter is selected in the Outline view, displays the Probability Density Function and the Cumulative Distribution Function for the Distribution Type chosen for the input parameter. There are also Parameters Parallel and Design Points vs Parameter charts, as you would have in a standard DOE.


Six Sigma Analysis

Once you have assigned the distribution functions to the input parameters in the DOE, you can update the project to perform the Six Sigma Analysis. Once the analysis is finished, edit the Six Sigma Analysis cell in the project to see the results.

Related Topics:

• "Using Six Sigma Analysis" (p. 181)

• Statistical Measures (p. 182)

The following views in the Six Sigma Analysis tab allow you to customize your analysis and view the results:

Outline: Allows you to:

• Select each input parameter and view its distribution properties, statistics, upper and lower bounds, initial value, and distribution chart.

• Select each output parameter and view its calculated maximum and minimum values, statistics, and distribution chart.

• Set the table display format for each parameter to Quantile-Percentile or Percentile-Quantile.

Properties:

For the Six Sigma Analysis, the following properties can be set:

• Sampling Type: Type of sampling used for the Six Sigma Analysis

– LHS

– WLHS

• Number of Samples

For the parameters, the following properties can be set:

• Probability Table: Manner in which the analysis information is displayed in the Table view.

– Quantile-Percentile

– Percentile-Quantile

Table: Displays a table that shows for each of the samples in the set:

• <Parameter Name>: a valid value in the parameter's range.

• Probability: Probability that the parameter will be less than or equal to the parameter value shown.

• Sigma Level: The approximate number of standard deviations away from the sample mean for the given sample value.

If Probability Table is set to Quantile-Percentile in the Statistics section of the Properties view, you can edit a parameter value and see the corresponding Probability and Sigma Level values. If it is set


to Percentile-Quantile, the columns are reversed and you can enter a Probability or Sigma Level value and see the corresponding changes in the other columns.
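As an illustration of the Probability and Sigma Level columns, both quantities can be estimated from a sample set. This is a sketch only (the helper name is hypothetical, and an empirical CDF is assumed for the Probability):

```python
import numpy as np

def probability_and_sigma(samples, value):
    """Empirical P(X <= value) and the approximate number of standard
    deviations that `value` lies from the sample mean (sketch)."""
    samples = np.asarray(samples, dtype=float)
    probability = np.mean(samples <= value)                  # empirical CDF
    sigma_level = (value - samples.mean()) / samples.std()   # distance in std devs
    return probability, sigma_level

p, s = probability_and_sigma(np.arange(1, 101), 50.0)
# p is 0.5 (half the samples are <= 50); s is close to 0 (50 is near the mean)
```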

Chart: When a parameter is selected in the Outline view, displays its Probability Density Function and the Cumulative Distribution Function. A global Sensitivities chart is available in the Outline.

Sensitivities Chart (SSA)

Allows you to graphically view the global sensitivities of each output parameter with respect to the input parameters in the Six Sigma Analysis. Select the Sensitivities chart cell under Charts in the Outline view to display the chart in the Chart view. Use the Properties view as follows:

Properties:

• Chart Mode: Set to Bar or Pie.

• Enable or disable the parameters that are displayed on the chart.

• Change various generic chart properties.

Related Topics:

• Working with Sensitivities (p. 204)

• Statistical Sensitivities in a Six Sigma Analysis (p. 270)


Using Parameters Correlation

The application of Goal Driven Optimization (GDO) and Six Sigma Analysis (SSA) in a finite element based analysis framework is always a challenge in terms of solving time, especially when the finite element model is large. For example, hundreds or thousands of finite element simulation runs in SSA are not uncommon. If one simulation run takes hours to complete, it is almost impractical to perform SSA at all with thousands or even hundreds of simulations.

To perform GDO or SSA in a finite element based analysis framework, it is always recommended to perform a Design of Experiments (DOE) study. From the DOE study, a response surface is built within the design space of interest. After a response surface is created, all simulation runs of GDO or SSA will then become function evaluations.

In the DOE study, however, the number of sampling points increases dramatically as the number of input parameters increases. For example, a total of 149 sampling points (finite element evaluations) are needed for 10 input variables using a Central Composite Design with a fractional factorial scheme. As the number of input variables increases, the analysis becomes more and more intractable. In this case, one would like to exclude unimportant input parameters from the DOE sampling in order to reduce unnecessary sampling points. A correlation matrix is a tool to help users identify input parameters deemed to be unimportant, which can then be treated as deterministic parameters in SSA.
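The 149-point figure can be checked with the standard point-count formula for a Central Composite Design, N = 2^(k-f) + 2k + 1 (fractional factorial core, plus axis points, plus one center point). The function below is an illustrative sketch, not a DesignXplorer API:

```python
def ccd_sample_count(k: int, f: int = 0) -> int:
    """Design points for a Central Composite Design with k inputs and a
    2^(k-f) fractional factorial core: core + 2k axis points + 1 center point."""
    return 2 ** (k - f) + 2 * k + 1

# 10 inputs with a 1/8 fraction (f = 3): 128 + 20 + 1 = 149 sampling points
print(ccd_sample_count(10, 3))  # → 149
```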

When to Use Parameters Correlation

As you add more input parameters to your Design of Experiments (DOE), the increase in the number of design points can decrease the efficiency of the analysis process. In this case, you may wish to focus on the most important inputs, while excluding inputs with a lesser impact on your intended design. Removing these less important parameters from the DOE reduces the generation of unnecessary sampling points.

Benefits of Parameters Correlation

A Parameters Correlation study allows you to:

• Determine which input parameters have the most (and the least) impact on your design.

• Identify the degree to which the relationship is linear/quadratic.

It also provides a variety of charts to assist in your assessment of parametric impacts. For more information, see Parameters Correlation Charts (p. 55).

This section covers the following topics:
Sample Generation
Running a Parameters Correlation
Viewing the Quadratic Correlation Information
Determining Significance
Viewing Significance and Correlation Values
Parameters Correlation Charts


Sample Generation

The importance of an input to an output is determined from their correlation. The samples used for the correlation calculation are generated with the Latin Hypercube Sampling (LHS) method. The Latin Hypercube samples are generated in such a way that the correlations among the input parameters are less than or equal to 5%. Also, each sample is randomly generated, but no two points share input parameters of the same value.

The Optimal Space-Filling Design (OSF) method of sample generation is an LHS design that is extended with postprocessing to achieve uniform space distribution of points, maximizing the distance between points. The image below illustrates how samples generated via the LHS method vary in placement from those generated by the OSF sampling method.

The image below illustrates the generation of 20 samples via the Monte Carlo, LHS, and OSF methods.
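A minimal sketch of plain Latin Hypercube Sampling follows: each dimension is split into equal strata, each stratum receives exactly one random point, and the strata are shuffled independently per dimension. This illustrates the stratification only; DesignXplorer additionally keeps inter-parameter correlations at or below 5% (and, for OSF, maximizes point spacing), which this sketch does not attempt:

```python
import numpy as np

def latin_hypercube(n_samples: int, n_dims: int, seed=None) -> np.ndarray:
    """Plain LHS on the unit hypercube: n_samples equal strata per dimension,
    exactly one random point per stratum."""
    rng = np.random.default_rng(seed)
    # one random position inside each of the n_samples strata, per dimension
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):          # shuffle strata independently per dimension
        rng.shuffle(u[:, d])
    return u

samples = latin_hypercube(20, 2, seed=0)
# Every column contains exactly one sample in each of the 20 strata.
```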

Pearson’s Linear Correlation

• Uses actual data for correlation evaluation.

• Correlation coefficients are based on the sample values.

• Used to correlate linear relationships.

Spearman’s Rank Correlation

• Uses ranks of data.

• Correlation coefficients are based on the rank of samples.


• Recognizes non-linear monotonic relationships (which are less restrictive than linear ones). In a monotonic relationship, one of the following two things happens:

– As the value of one variable increases, the value of the other variable increases as well.

– As the value of one variable increases, the value of the other variable decreases.

• Deemed the more accurate method.
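The difference between the two methods can be demonstrated with SciPy. This is an illustrative sketch with synthetic data, not part of DesignXplorer:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 5.0, 200)
y = x ** 3 + rng.normal(0.0, 0.5, 200)   # monotonic but strongly non-linear

pearson_r, _ = stats.pearsonr(x, y)      # uses the actual sample values
spearman_r, _ = stats.spearmanr(x, y)    # uses the ranks of the samples

# Spearman's rank correlation captures the monotonic trend (close to 1),
# while Pearson's linear correlation is noticeably lower for this pair.
```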

Running a Parameters Correlation

To create a Parameters Correlation system in your Project Schematic, do the following:

1. With the Project Schematic displayed, drag a Parameters Correlation template from the design exploration area of the toolbox directly under the Parameter Set bar of the schematic.

2. After placing the Parameters Correlation system in the Project Schematic, double-click the Parameters Correlation cell to open the corresponding tab and set up the correlation options described in Parameters Correlation Component Reference (p. 31).

3. Right-click the Parameters Correlation cell and select Update from the context menu.

4. When the update is finished, examine the results using the various charts shown in the Outline view. See Parameters Correlation Component Reference (p. 31) for more information about the available charts.

Auto Stop Type

If the Auto Stop Type property is set to:

• Execute All Simulations:

DesignXplorer updates the number of design points specified by the Number of Samples property.

• Enable Auto Stop:

The number of samples required to calculate the correlation is determined according to the convergence of the means and standard deviations of the output parameters. At each iteration, the Mean and Standard Deviation convergences are checked against the level of accuracy specified by the Mean Value Accuracy and Standard Deviation Accuracy properties. See Correlation Convergence Process (p. 54).

DesignXplorer attempts to minimize the number of design points to be updated by monitoring the evolution of the output parameter values, and will stop calculating design points as soon as the level of accuracy is met (i.e., the Mean and Standard Deviation are stable for all output parameters). If the


process has converged, DesignXplorer stops even if the number of calculated points specified by the Number of Samples property has not been reached.

Correlation Convergence Process

Convergence of the correlation is determined as follows:

1. The convergence status is checked each time the number of points specified in the Convergence Frequency Check property has been updated.

2. For each output parameter:

• The Mean and the Standard Deviation are calculated based on all the up-to-date design points available at this step.

• The Mean is compared with the Mean at the previous step. It is considered to be stable if the difference is smaller than 1% by default (Mean Value Accuracy = 0.01).

• The Standard Deviation is compared with the Standard Deviation at the previous step. It is considered to be stable if the difference is smaller than 2% by default (Standard Deviation Accuracy = 0.02).

3. If the Mean and Standard Deviation are stable for all output parameters, the correlation is converged.

The convergence status is indicated by the value of the Converged property. When the process is converged, the Converged property equals Yes and any remaining unsolved samples are automatically removed. If the process has stopped because the Number of Samples is reached before convergence, then the Converged property equals No.

Preview Parameters Correlation

The Preview operation is available for the Parameters Correlation feature. As in other features, it allows you to preview the list of samples to be calculated, depending on the selected options. The Preview operation allows you to try various sampling options and see the list of samples to be generated, without actually running the samples (Update).

Monitoring and Interrupting the Update Operation

During a long update operation (direct solve Correlation context), you can monitor the progress easily by opening the Progress view and observing the table of samples, which refreshes automatically when results are returned to DesignXplorer.

By clicking the Stop button in the Progress view, you can interrupt the update operation. Then, if enough samples have been calculated, partial correlation results are generated. You can review these results in the Table view and by selecting a chart object in the Outline.

To restart the update operation, right-click the Parameters Correlation cell and select Update from the context menu. If editing the component, you can also click the Update button on the toolbar. The process will restart where it was interrupted.


Viewing the Quadratic Correlation Information

Quadratic correlation between a parameter pair is indicated by the coefficient of determination (R²) of the quadratic regression.

Note

R² is shown as R2 in the user interface and correlation charts.

The R² between all parameter pairs is displayed in the Determination Matrix. The closer R² is to 1, the better the quadratic regression is. Unlike the Correlation Matrix, the Determination Matrix is not symmetric. The Determination Matrix is displayed in the Table view below the Correlation Matrix. Like the Correlation Matrix, there is a chart associated with the Determination Matrix, in which any parameter can be disabled.

In addition, the Correlation Scatter chart displays both a quadratic trend line and the trend line equationfor the selected parameter pair.
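The coefficient of determination of a quadratic regression can be reproduced with NumPy. This is a sketch of the standard R² formula, not DesignXplorer's internal routine:

```python
import numpy as np

def quadratic_r2(x, y):
    """R² of the quadratic regression of y on x: 1 - SS_res / SS_tot."""
    coeffs = np.polyfit(x, y, deg=2)         # fit y ≈ a*x² + b*x + c
    residuals = y - np.polyval(coeffs, x)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

x = np.linspace(-2.0, 2.0, 50)
y = 3.0 * x ** 2 - x + 1.0
# An exactly quadratic relationship yields R² ≈ 1.
```

Note that the regression of y on x generally differs from the regression of x on y, which is why the Determination Matrix is not symmetric.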

Determining Significance

Significance of an input parameter to an output parameter is determined using a statistical hypothesis test. In the hypothesis test, a NULL hypothesis of insignificance is assumed and tested to determine whether it is statistically true according to the significance level (or acceptable risk) set by the user. From the hypothesis test, a p-Value (probability value) is calculated and compared with the significance level. If the p-Value is greater than the significance level, it is concluded that the NULL hypothesis is true and that the input parameter is insignificant to the output parameter, and vice versa. See the Six Sigma Analysis Theory section for more information.
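As a sketch of this kind of test, SciPy reports a p-Value alongside each correlation coefficient. This is illustrative only (synthetic data; DesignXplorer's exact test statistic is covered in the Six Sigma Analysis Theory section):

```python
import numpy as np
from scipy import stats

SIGNIFICANCE_LEVEL = 0.025   # DesignXplorer's default Significance Level

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(scale=0.5, size=100)   # clearly related to x

r, p_value = stats.spearmanr(x, y)
# NULL hypothesis: the input is insignificant to the output.
# p_value >  significance level -> NULL accepted (input insignificant)
# p_value <= significance level -> NULL rejected (input significant)
significant = p_value <= SIGNIFICANCE_LEVEL
```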

Viewing Significance and Correlation Values

If you select Sensitivities from the Outline view in the Six Sigma Analysis tab, you can review the sensitivities derived from the samples generated for the Parameters Correlation. The Parameters Correlation sensitivities are global sensitivities. In the Properties view for the Sensitivities chart, you can choose the output parameters for which you want to review sensitivities and the input parameters that you would like to evaluate for the output parameters.

The default setting for the Significance Level (found in the Design Exploration section of the Tools > Options dialog) is 0.025. Parameters with a sensitivity value above this significance will be shown with a flat line on the Sensitivity chart, and the value displayed for those parameters when you mouse over them on the chart will be 0. In order to view the actual correlation value of the insignificant parameter pair, you can either select the Correlation Matrix from the Outline and mouse over the square for that pair in the matrix, or you can set the Significance Level to 1, which bypasses the significance test and displays all input parameters on the Sensitivities chart with their actual correlation values.

Parameters Correlation Charts

Parameters Correlation provides five different charts that allow you to assess parametric impacts in your project: Correlation Matrix, Determination Matrix, Correlation Scatter, Sensitivities, and Determination Histogram.


When the Parameters Correlation is updated, one instance of each chart is added to the project. In the Parameters Correlation tab, you can view each chart by clicking it under the Charts node of the Outline view.

To add a new instance of a chart, double-click it in the Toolbox. The chart will be added to the bottom of the Charts list.

To add a Correlation Scatter chart for a particular parameter combination, you can also right-click the associated cell in the Correlation Matrix chart and select Insert <x-axis> vs <y-axis> Correlation Scatter.

Related Topics:

Using the Correlation Matrix Chart
Using the Correlation Scatter Chart
Using the Determination Matrix Chart
Using the Determination Histogram Chart
Using the Sensitivities Chart

Using the Correlation Matrix Chart

The Correlation Matrix chart is a visual rendering of the information in the Correlation Matrix table. The Correlation Coefficient indicates whether there is a relationship between two variables and whether the relationship is positive or negative.

Color-coding of the cells indicates the strength of the correlation (the R² value). The R² value is displayed when you hover your mouse over a cell. The closer the R² value is to 1, the stronger the relationship.

In the Correlation Matrix below, we can see that input parameter P5–LENGTH is a major input because it drives all the outputs.


On the other hand, we can see that input parameter P13–PIPE_Thickness is not important to the study because it has little impact on the outputs. In this case, you might want to disable P13–PIPE_Thickness by deselecting the Enabled check box in the Properties view. When the input is disabled, the chart changes accordingly.

To disable parameters, you can also right-click a cell corresponding to that parameter and select an option from the context menu. You can disable the selected input, disable the selected output, disable all other inputs, or disable all other outputs.

To generate a Correlation Scatter chart for a given parameter combination, right-click the corresponding cell in the Correlation Matrix chart and select Insert <x-axis> vs <y-axis> Correlation Scatter.

You can also select the Export Data context option to export the correlation matrix data to a CSV file.

Using the Correlation Scatter Chart

The Correlation Scatter chart allows you to plot linear and quadratic trend lines for the samples and extract the linear and quadratic Coefficient of Determination (R²). In other words, it conveys the degree of quadratic correlation between a parameter pair via a graphical presentation of linear and quadratic trends.

Note

R² is shown as R2 in the user interface and correlation charts.


You can create a Correlation Scatter chart for a given parameter combination by right-clicking the corresponding cell in the Correlation Matrix chart and selecting Insert <x-axis> vs <y-axis> Correlation Scatter from the context menu.

In this example, in the Trend Lines section of the Properties view, you can see the R² values for both the Linear and Quadratic trend lines.

Since both the Linear and Quadratic properties are enabled in this example:

• The equations for the linear and quadratic trend lines are shown in the chart legend.

• The linear and quadratic trend lines are each represented by a separate line on the chart. The closer the samples lie to the curve, the closer the Coefficient of Determination will be to the optimum value of 1.

When you export the Correlation Scatter chart data to a CSV file or generate a report, the trend line equations are included in the export and are shown in the CSV file or Project Report.


Using the Determination Matrix Chart

The Determination Matrix chart is a visual rendering of the nonlinear (quadratic) information in the Determination Matrix table. It shows the Coefficient of Determination (R²) between all parameter pairs. Unlike the Correlation Matrix, however, the Determination Matrix chart is not symmetric.

Color-coding of the cells indicates the strength of the correlation (the R²). The R² value is displayed when you hover your mouse over a cell. The closer the R² value is to 1, the stronger the relationship.
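The asymmetry follows from how the coefficient is computed: the R² of a quadratic fit of parameter B as a function of A generally differs from the R² of A as a function of B. A minimal illustration with hypothetical data (not DesignXplorer's algorithm):

```python
import numpy as np

def quadratic_r2(x, y):
    """R^2 of a quadratic fit of y as a function of x."""
    coeffs = np.polyfit(x, y, 2)
    resid = y - np.polyval(coeffs, x)
    return 1.0 - resid.var() / y.var()

def determination_matrix(samples):
    """samples: dict mapping parameter name -> 1-D array of sample values.
    Returns m[row][col] = R^2 of col explained (quadratically) by row.
    Unlike a correlation matrix, this is generally not symmetric."""
    names = list(samples)
    return {r: {c: quadratic_r2(samples[r], samples[c]) for c in names}
            for r in names}

# Hypothetical data: P2 drives P8 through a quadratic relationship, so P2
# explains P8 well, but P8 does not explain P2 (two branches of the parabola).
rng = np.random.default_rng(1)
p2 = rng.uniform(-1.0, 1.0, 200)
p8 = p2**2 + rng.normal(0.0, 0.05, p2.size)
m = determination_matrix({"P2": p2, "P8": p8})
```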

In the Determination Matrix below, we can see that input parameter P5–Tensile Yield Strength is a major input because it drives all the outputs.

You may want to disable inputs that have little impact on the outputs. To disable parameters in the chart:

• Deselect the Enabled check box in the Properties view.

• Right-click on a cell corresponding to that parameter and select an option from the context menu. You can disable the selected input, disable the selected output, disable all other inputs, or disable all other outputs.

You can also select the Export Data context option to export the correlation matrix data to a CSV file. For more information, see Extended CSV File Format (p. 281).


Using the Determination Histogram Chart

The Determination Histogram chart allows you to see what inputs drive a selected output parameter. You can set the Determination Type property to Linear or Quadratic. The Threshold R² (%) property allows you to filter the input parameters by hiding the input parameters with a determination coefficient lower than the given threshold.

When you view a Determination Histogram chart, you should also check the Full Model R² (%) value to see how well output variations are explained by input variations. This value represents the variability of the output parameter that can be explained by a linear or quadratic correlation between the input parameters and the output parameter. The closer this value is to 100%, the more certain it is that output variations result from the inputs. The lower the value, the more likely it is that other factors such as noise, mesh error, or an insufficient number of points may be causing the output variations.
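The Full Model R² (%) idea can be sketched as the R² of a regression of the output on all inputs together. A hedged NumPy illustration with made-up, beam-like parameter names:

```python
import numpy as np

def full_model_r2_percent(inputs, output):
    """Percentage of output variance explained by a linear model over all
    inputs together (an R^2 for the full model, expressed in percent)."""
    X = np.column_stack([np.ones(len(output))] + list(inputs))
    beta, *_ = np.linalg.lstsq(X, output, rcond=None)
    resid = output - X @ beta
    return 100.0 * (1.0 - resid.var() / output.var())

# Made-up samples: displacement driven mostly by length, with mild noise.
rng = np.random.default_rng(2)
length = rng.uniform(50.0, 100.0, 100)
height = rng.uniform(5.0, 10.0, 100)
force = rng.uniform(1.0, 2.0, 100)
displacement = (0.2 * length - 1.5 * height + 3.0 * force
                + rng.normal(0.0, 0.5, 100))

r2_full = full_model_r2_percent([length, height, force], displacement)
# Close to 100% here; much lower values would hint at noise or missing effects.
```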

In the image below, you can see that input parameters P3–LENGTH, P2–HEIGHT, and P4–FORCE all affect output P8–DISPLACEMENT. You can also see that of the three inputs, P3–LENGTH has by far the greatest impact.

In the example below, you can see that the value for a linear determination is 96.2%.


To view the chart for a quadratic determination, in the Properties view, set Determination Type to Quadratic. In the example below, we can see that with a quadratic determination type, input P5–YOUNG is shown to also have a slight impact on P8–DISPLACEMENT. (You can filter your inputs to keep only the most important parameters, enabling or disabling them with the check boxes in the Outline view.)

In this example, we can also see that the Full Model R² (%) value is improved slightly, now raised to 97.436%.

Using the Sensitivities Chart

The Sensitivities chart shows global sensitivities of the output parameters with respect to the input parameters. Positive sensitivity occurs when increasing the input increases the output. Negative sensitivity occurs when increasing the input decreases the output.

The Sensitivities chart can be displayed in either Bar or Pie mode. The chart below is displayed in Bar mode.

Generally, the impact of an input parameter on an output parameter is driven by the following two things:

• The amount by which the output parameter varies across the variation range of an input parameter.


• The variation range of an input parameter. Typically, the wider the variation range is, the larger the impact of the input parameter will be.

The statistical sensitivities are based on the Spearman rank-order correlation coefficients that simultaneously take both aspects into account.
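As a rough illustration of the underlying quantity, the Spearman rank-order coefficient is the Pearson correlation of the rank-transformed samples (this simple form assumes no tied values); its sign gives the direction of the sensitivity. The data below are hypothetical:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank-order correlation: the Pearson correlation of the ranks
    (assumes no tied values). Its sign gives the direction of the
    sensitivity; its magnitude gives the strength."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical samples: the output rises with P1 and falls with P2.
rng = np.random.default_rng(3)
p1 = rng.uniform(0.0, 1.0, 300)
p2 = rng.uniform(0.0, 1.0, 300)
out = 4.0 * p1 - 2.0 * p2 + rng.normal(0.0, 0.2, 300)

s1 = spearman(p1, out)  # positive: increasing P1 increases the output
s2 = spearman(p2, out)  # negative: increasing P2 decreases the output
```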


Using Design of Experiments

Design of Experiments (DOE) is a technique used to scientifically determine the location of sampling points and is included as part of the Response Surface, Goal Driven Optimization, and Analysis systems. There is a wide range of DOE algorithms or methods available in engineering literature. These techniques all have one common characteristic: they try to locate the sampling points such that the space of random input parameters is explored in the most efficient way, or obtain the required information with a minimum of sampling points. Sample points in efficient locations will not only reduce the required number of sampling points, but also increase the accuracy of the response surface that is derived from the results of the sampling points. By default, the deterministic method uses a Central Composite Design, which combines one center point, points along the axis of the input parameters, and the points determined by a fractional factorial design.
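The structure of a CCD matrix can be sketched as follows. The fractional-factorial part is approximated here by simple truncation, which is not a true fractional-factorial generator and not DesignXplorer's internal algorithm; it only illustrates the composition and point count:

```python
import itertools

def ccd_points(k, alpha=1.0, fraction=0):
    """Assemble a CCD-style matrix for k inputs: one center point, 2k axis
    points at distance alpha, and a two-level factorial block. The
    'fraction' handling below simply truncates the full factorial, which is
    NOT a real fractional-factorial design; it only keeps the count right."""
    center = [(0.0,) * k]
    axis = []
    for i in range(k):
        for sign in (+1.0, -1.0):
            p = [0.0] * k
            p[i] = sign * alpha
            axis.append(tuple(p))
    corners = list(itertools.product((-1.0, 1.0), repeat=k))
    corners = corners[: 2 ** (k - fraction)]
    return center + axis + corners

pts = ccd_points(3)  # 1 center + 6 axis + 8 corners = 15 points for 3 inputs
```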

Once you have set up your input parameters, you can update the DOE, which submits the generated design points to the analysis system for solution. Design points are solved simultaneously if the analysis system is set up to do so; sequentially, if not. After the solution is complete, you can update the Response Surface cell, which generates response surfaces for each output parameter based on the data in the generated design points.

Note

Requirements and recommendations regarding the number of input parameters vary according to DOE type. For more information, see Number of Input Parameters for DOE Types (p. 69).

If you change the Design of Experiments type after doing an initial analysis and preview the Design of Experiments Table, any design points generated for the new algorithm that are the same as design points solved for a previous algorithm will appear as up-to-date. Only the design points that are different from any previously submitted design points need to be solved.

You should set up your DOE Properties before generating your DOE Design Point matrix. The following topics describe setting up and solving your Design of Experiments.

Setting Up the Design of Experiments
Design of Experiments Types
Number of Input Parameters for DOE Types
Comparison of LHS and OSF DOE Types
Using a Central Composite Design DOE
Upper and Lower Locations of DOE Points
DOE Matrix Generation
Importing and Copying Design Points

Setting Up the Design of Experiments

To set up your Design of Experiments (DOE):


1. Open the Design of Experiments tab by double-clicking on the Design of Experiments cell in the Project Schematic. Alternatively, you can right-click the cell and select Edit from the context menu.

2. On the Design of Experiments tab Outline view, select the Design of Experiments node.

3. In the Design of Experiments section of the Properties view, select a Design of Experiments Type. For details, see Design of Experiments Types (p. 64).

4. Specify additional properties for the DOE. (Available properties are determined by the Design of Experiments Type selected.)

5. Click the Update toolbar button.

Design of Experiments Types

The DOE types available are:
Central Composite Design (CCD)
Optimal Space-Filling Design (OSF)
Box-Behnken Design
Custom
Custom + Sampling
Sparse Grid Initialization
Latin Hypercube Sampling Design (LHS)

Central Composite Design (CCD)

Central Composite Design (CCD) is the default DOE type. It provides a screening set to determine the overall trends of the meta-model to better guide the choice of options in Optimal Space-Filling Design. The CCD DOE type supports a maximum of 20 input parameters. For more information, see Number of Input Parameters for DOE Types (p. 69).

The following properties are available for the CCD DOE type.

• Design Type: By specifying the Design Type for CCD, you can help to improve the response surface fit for DOE studies. For each CCD type, the alpha value is defined as the location of the sampling point that accounts for all quadratic main effects. The following CCD design types are available:

– Face-Centered: A three-level design with no rotatability. The alpha value equals 1.0. A Template Type setting automatically appears, with Standard and Enhanced options. Choose Enhanced for a possibly better fit for the response surfaces.

– Rotatable: A five-level design that includes rotatability. The alpha value is calculated based on the number of input variables and a fraction of the factorial part. A design with rotatability has the same variance of the fitted value regardless of the direction from the center point.

– VIF-Optimality: A five-level design in which the alpha value is calculated by minimizing a measure of non-orthogonality known as the Variance Inflation Factor (VIF). The more highly correlated the input variable with one or more terms in a regression model, the higher the Variance Inflation Factor.

– G-Optimality: Minimizes a measure of the expected error in a prediction and minimizes the largest expected variance of prediction over the region of interest.


– Auto-Defined: Design exploration automatically selects the Design Type based on the number of input variables. Use of this option is recommended for most cases, as it automatically selects G-Optimality (if the number of input variables is 5) or VIF-Optimality otherwise.

However, the Rotatable design may be used if the default option does not provide good values for the Goodness of Fit from the response surface plots. Also, the Enhanced template may be used if the default Standard template does not fit the response surfaces well.

• Template Type: Enabled for the Rotatable and Face-Centered design types. The following options are available:

– Standard

– Enhanced: Choose this option for a possibly better fit for the response surfaces.

For more information, see Using a Central Composite Design DOE (p. 71).
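For reference, the classical textbook alpha for a rotatable CCD is the fourth root of the number of factorial points; the fraction f actually chosen by the software for a given number of inputs is not specified here, so treat this as an illustrative formula only:

```python
def rotatable_alpha(k, f=0):
    """Classical alpha for a rotatable CCD: the fourth root of the number of
    factorial (corner) points, alpha = (2**(k - f)) ** 0.25, where k is the
    number of inputs and f the fraction of the factorial part (assumed
    known; DesignXplorer's internal choice of f is not documented here)."""
    n_factorial = 2 ** (k - f)
    return n_factorial ** 0.25

# Two inputs, full factorial: alpha = 4 ** 0.25, roughly 1.414.
```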

Optimal Space-Filling Design (OSF)

Optimal Space-Filling Design (OSF) creates optimal space-filling Design of Experiments (DOE) plans according to some specified criteria. Essentially, OSF is a Latin Hypercube Sampling Design (LHS) that is extended with post-processing. It is initialized as an LHS and then optimized several times, remaining a valid LHS (without points sharing rows or columns) while achieving a more uniform space distribution of points (maximizing the distance between points).

To offset the noise associated with physical experimentation, classical DOE types such as CCD focus on parameter settings near the perimeter of the design region. Because computer simulation is not quite as subject to noise, though, the Optimal Space-Filling (OSF) design is able to distribute the design parameters equally throughout the design space with the objective of gaining the maximum insight into the design with the fewest number of points. This advantage makes it appropriate when a more complex meta-modeling technique such as Kriging, Non-Parametric Regression, or Neural Networks is used.

OSF shares some of the same disadvantages as LHS, though to a lesser degree. Possible disadvantages of an OSF design are that extremes (i.e., the corners of the design space) are not necessarily covered and that the selection of too few design points can result in a lower quality of response prediction.
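A minimal sketch of the Max-Min Distance idea: start from an LHS and accept in-column swaps only when they increase the minimum inter-point distance, so the result stays a valid LHS. This is illustrative only, not the actual OSF algorithm:

```python
import numpy as np

def lhs(n, k, rng):
    """Basic Latin Hypercube on [0, 1]: one point per row/column of the grid."""
    return np.column_stack([rng.permutation(n) for _ in range(k)]) / (n - 1)

def min_distance(pts):
    """Smallest pairwise distance in the sample set."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return d[np.triu_indices(len(pts), 1)].min()

def osf_maximin(n, k, cycles=300, seed=0):
    """Start from an LHS and repeatedly swap two values within one column,
    keeping a swap only if it increases the minimum inter-point distance.
    In-column swaps preserve the Latin (no shared row/column) property."""
    rng = np.random.default_rng(seed)
    pts = lhs(n, k, rng)
    best = min_distance(pts)
    for _ in range(cycles):
        col = rng.integers(k)
        i, j = rng.choice(n, size=2, replace=False)
        pts[[i, j], col] = pts[[j, i], col]
        d = min_distance(pts)
        if d > best:
            best = d                             # keep the improving swap
        else:
            pts[[i, j], col] = pts[[j, i], col]  # revert
    return pts, best

pts, dmin = osf_maximin(10, 2)
```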

Note

The OSF DOE type is a Latin Hypercube Sampling design that is extended with post-processing. For a comparison of the two, see Comparison of LHS and OSF DOE Types (p. 70).

The following properties are available for the OSF DOE type.

• Design Type: The following choices are available:

– Max-Min Distance (default): Maximizes the minimum distance between any two points. This strategy ensures that no two points are too close to each other. For a small size of sampling (N), the Max-Min Distance design will generally lie on the exterior of the design space and fill in the interior as N becomes larger. Generally the faster algorithm.

– Centered L2: Minimizes the centered L2-discrepancy measure. The discrepancy measure corresponds to the difference between the empirical distribution of the sampling points and the uniform distribution.


In other words, the centered L2 yields a uniform sampling. Computationally faster than Maximum Entropy.

– Maximum Entropy: Maximizes the determinant of the covariance matrix of the sampling points to minimize uncertainty in unobserved locations. This option often provides better results for highly correlated design spaces. However, its cost increases non-linearly with the number of input parameters and the number of samples to be generated. Thus, it is recommended only for small parametric problems.

• Maximum Number of Cycles: Determines the number of optimization loops the algorithm needs, which in turn determines the discrepancy of the DOE. The optimization is essentially combinatorial, so a large number of cycles will slow down the process; however, this will make the discrepancy of the DOE smaller. For practical purposes, 10 cycles usually is good for up to 20 variables. Must be greater than 0. The default is 10.

• Samples Type: Determines the number of DOE points the algorithm should generate. This option is suggested if you have some advanced knowledge about the nature of the meta-model. The following choices are available:

– CCD Samples (default): Generates the same number of samples a CCD DOE would generate for the same number of inputs. You may want to use this to generate a space-filling design which has the same cost as a corresponding CCD design.

– Linear Model Samples: Generates the number of samples as needed for a linear meta-model.

– Pure Quadratic Model Samples: Generates the number of samples as needed for a pure quadratic meta-model (no cross terms).

– Full Quadratic Samples: Generates the number of samples needed to generate a full quadratic model.

– User-Defined Samples: Specify the desired number of samples.

• Seed Value: Set the value used to initialize the random number generator invoked internally by the LHS algorithm. Although the generation of a starting point is random, the seed value consistently results in a specific LHS. This property allows you to generate different samplings (by changing the value) or to regenerate the same sampling (by keeping the same value). Defaults to 0.

• Number of Samples: Enabled when Samples Type is set to User-Defined Samples. Specifies the default number of samples. Defaults to 10.

Box-Behnken Design

A Box-Behnken Design is a three-level quadratic design that does not contain a fractional factorial design. The sample combinations are treated in such a way that they are located at midpoints of edges formed by any two factors. The design is rotatable (or, in some cases, nearly rotatable).

One advantage of a Box-Behnken design is that it requires fewer design points than a full factorial CCD and generally requires fewer design points than a fractional factorial CCD. Additionally, a Box-Behnken Design avoids extremes, allowing you to work around extreme factor combinations. Consider using the Box-Behnken Design DOE type if your project has parametric extremes (for example, has extreme parameter values in corners that are difficult to build). Since the Box-Behnken DOE doesn't have corners and does not combine parametric extremes, it can reduce the risk of update failures. For details, see the Box-Behnken Design (p. 215) Theory section.
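The geometry described above can be sketched directly: Box-Behnken points sit at the midpoints of the edges formed by each pair of factors, plus center points, and never at the corners. An illustrative generator, not the software's implementation:

```python
import itertools

def box_behnken(k, center_points=1):
    """Illustrative Box-Behnken generator: for each pair of factors, take the
    four midpoints of the edges they form (levels -1/+1, all other factors
    at 0), then add center points. Gives 4*C(k,2) + center_points runs and
    never places a point at a corner of the design space."""
    pts = []
    for i, j in itertools.combinations(range(k), 2):
        for si, sj in itertools.product((-1, 1), repeat=2):
            p = [0] * k
            p[i], p[j] = si, sj
            pts.append(tuple(p))
    pts += [(0,) * k] * center_points
    return pts

design = box_behnken(3)  # 4 * 3 + 1 = 13 runs for three factors
```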

Possible disadvantages of a Box-Behnken design are:


• Prediction at the corners of the design space is poor, and there are only three levels per parameter.

• A maximum of 12 input parameters is supported. For more information, see Number of Input Parameters for DOE Types (p. 69).

No additional properties are available for the Box-Behnken Design DOE type.

Custom

The Custom DOE type allows for definition of a custom DOE Table. You can manually add new design points, entering the input and (optionally) output parameter values directly into the table. If you previously solved the DOE using one of the other algorithms, those design points will be retained and you can add new design points to the table. You can also import and export design points into the custom DOE Table from the Parameter Set.

You can change the editing mode of the DOE table in order to edit the output parameter values. You can also copy/paste data and import data from a CSV file (right-click and select Import Design Points).

For more information, see Working with Tables (p. 205).

Note

• You can generate a DOE matrix using one of the other design options provided, then switch to Custom to modify the matrix. The opposite, however, is not possible; that is, if you define a Custom matrix first, then change to one of the other design options provided, the matrix is cleared.

• The table can contain derived parameters. Derived parameters are always calculated by the system, even if the table mode is All Output Values Editable.

• Editing output values for a row changes the Design of Experiments cell's state to Update Required. The DOE will need to be updated, even though no calculations are done.

• The DOE charts do not reflect the points added manually using the Custom option until the DOE is updated.

• It is expected that the Custom option will be used to enter DOE plans that were built externally. If you use this feature to enter all of the points in the matrix manually, you must make sure to enter enough points so that a good fitting can be created for the response surface. This is an advanced feature that should be used with caution; always verify your results with a direct solve.

No additional properties are available for the Custom DOE type.

Custom + Sampling

The Custom + Sampling DOE type provides the same capabilities as the Custom DOE type and allows you to complete the DOE table automatically to fill the design space efficiently. For example, a DOE table initialized with imported design points from a previous study, or your initial DOE (Central Composite Design, Optimal Space-Filling Design, or Custom), can be completed with new points. The generation of these new design points takes into account the coordinates of previous design points.


The following property is available for the Custom + Sampling DOE type:

• Total Number of Samples: Expects the user to enter the desired number of samples (the number of existing design points is included in this count). You must enter a positive number. If the total number of samples is less than the number of existing points, no new points will be added. If there are discrete input parameters, the total number of samples corresponds to the number of points that should be reached for each combination of discrete parameters.

Sparse Grid Initialization

Sparse Grid Initialization is the DOE type required to run a Sparse Grid Interpolation. Sparse Grid is an adaptive meta-model driven by the accuracy that you request. It increases the accuracy of the response surface by automatically refining the matrix of design points in locations where the relative error of the output parameter is higher. This DOE type generates levels 0 and 1 of the Clenshaw-Curtis Grid. In other words, because the Sparse Grid algorithm is based on a hierarchy of grids, the Sparse Grid Initialization DOE type generates a DOE matrix containing all the design points for the smallest required grid: the level 0 (the point at the current values) plus the level 1 (two points per input parameter).

One advantage to a Sparse Grid design is that it refines only in the directions necessary, so that fewer design points are needed for the same quality response surface. Another is that Sparse Grid is effective at handling discontinuities. Although this DOE type is required to build a Sparse Grid response surface, it can also be used by other types of response surface.

No additional properties are available for the Sparse Grid Initialization DOE type.

Latin Hypercube Sampling Design (LHS)

In the Latin Hypercube Sampling Design DOE type, the DOE is generated by the LHS algorithm, an advanced form of the Monte Carlo sampling method that avoids clustering samples. In a Latin Hypercube Sampling, the points are randomly generated in a square grid across the design space, but no two points share the same value (i.e., no point shares a row or a column of the grid with any other point).

Possible disadvantages of an LHS design are that extremes (i.e., the corners of the design space) are not necessarily covered and that the selection of too few design points can result in a lower quality of response prediction.
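A minimal sketch of the grid construction: each column uses every grid cell exactly once, so no two points share a row or a column. Illustrative only, not DesignXplorer's generator:

```python
import numpy as np

def latin_hypercube(n, k, seed=0):
    """n samples in k dimensions on [0, 1): each column visits every one of
    the n grid cells exactly once, so no two points share a row or a column
    of the grid."""
    rng = np.random.default_rng(seed)
    # one sample inside each of the n cells, at a random offset...
    cells = (np.arange(n)[:, None] + rng.random((n, k))) / n
    # ...then shuffle the cell order independently per column
    for col in range(k):
        rng.shuffle(cells[:, col])
    return cells

samples = latin_hypercube(8, 2)
```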

Note

The Optimal Space-Filling Design DOE type is an LHS design that is extended with post-processing. For a comparison of the two, see Comparison of LHS and OSF DOE Types (p. 70).

The following properties are available for the LHS DOE type:

• Samples Type: Determines the number of DOE points the algorithm should generate. This option is suggested if you have some advanced knowledge about the nature of the meta-model. The following choices are available:

– CCD Samples (default): Generates the same number of samples a CCD DOE would generate for the same number of inputs. You may want to use this to generate a space-filling design which has the same cost as a corresponding CCD design.

– Linear Model Samples: Generates the number of samples as needed for a linear meta-model.


– Pure Quadratic Model Samples: Generates the number of samples as needed for a pure quadratic meta-model (no cross terms).

– Full Quadratic Samples: Generates the number of samples needed to generate a full quadratic model.

– User-Defined Samples: Specify the desired number of samples.

• Seed Value: Set the value used to initialize the random number generator invoked internally by the LHS algorithm. Although the generation of a starting point is random, the seed value consistently results in a specific LHS. This property allows you to generate different LHS samplings (by changing the value) or to regenerate the same LHS sampling (by keeping the same value). Defaults to 0.

• Number of Samples: Enabled when Samples Type is set to User-Defined Samples. Specifies the default number of samples. Defaults to 10.

Number of Input Parameters for DOE Types

The number of enabled input parameters has an impact on the generation of a DOE and response surface. The more input parameters that are enabled, the longer it takes to generate the DOE and update corresponding design points. Additionally, having a large number of input parameters makes it more difficult to generate an accurate response surface. The quality of a response surface is dependent on the relationships that exist between input and output parameters, so having fewer enabled inputs makes it easier to determine how they impact the output parameters.

As such, the recommendation for most DOE types is to have as few enabled input parameters as possible (fewer than 20 would be ideal). The exceptions are the CCD and Box-Behnken algorithms; CCD has a hard limit at 20 parameters and Box-Behnken has a hard limit at 12 parameters. The number of inputs should be taken into account when selecting a DOE type for your study (or when defining inputs, if you know ahead of time which DOE type you intend to use).

If you are using a DOE type other than CCD or Box-Behnken and more than the recommended maximum of 20 inputs are enabled, DesignXplorer shows an alert icon in the Message column of the Outline view of the Design of Experiments tab, the Response Surface tab, and the Optimization tab for Adaptive Single-Objective (ASO) and Adaptive Multiple-Objective (AMO) optimizations.

The warning icon is displayed only when required component edits are completed. The number next to the icon indicates the number of active warnings, and you can click on the icon to review the warning messages. To remove the warning, disable inputs by deselecting them in the Enabled column until 20 or fewer are still enabled. If you are unsure of which parameters to disable, you can use a Parameters Correlation study to determine which inputs are least correlated with your results. For more information, see "Using Parameters Correlation" (p. 51).


Besides the number of enabled inputs, however, there are other factors that can affect response surface generation. For example:

• Meta-models (response surfaces) using automatic refinement will add additional design points to improve the resolution of each output. The more outputs and the more complicated the relationship between the inputs and outputs, the more design points that will be required.

• Increasing the number of output parameters increases the number of response surfaces that are required.

• Discrete input parameters can be expensive because a response surface is generated for each discrete combination, as well as for each output parameter.

• A non-linear or non-polynomial relationship between input and output parameters will require more design points to build an accurate response surface, even with a small number of enabled inputs.

Factors such as these can offset the importance of using a small number of enabled input parameters. If you expect that a response surface can be generated with relative ease—for example, the project has polynomial relationships between the inputs and outputs, only continuous inputs, and a small number of outputs—you may decide it's worthwhile to exceed the recommended number of inputs. In this case, you can ignore the warning and proceed with your update.

Comparison of LHS and OSF DOE Types

The Latin Hypercube Sampling Design (LHS) and Optimal Space-Filling Design (OSF) methods both create Design of Experiments (DOE) plans according to specified criteria.

• An LHS design is an advanced form of the Monte Carlo sampling method. In an LHS design, no point shares a row or column of the design space with any other point.

• An OSF design is essentially an LHS design that is optimized through several iterations, maximizing the distance between points to achieve a more uniform distribution across the design space. Because it aims to gain the maximum insight into the design by using the fewest number of points, it is an effective DOE choice for complex meta-modeling techniques that use relatively large numbers of design points.

Because OSF incorporates the LHS algorithm, both DOE types aim to conserve optimization resources by avoiding the creation of duplicate points. Given an adequate number of design points to work with,


both methods result in a high quality of response prediction. The OSF algorithm, however, offers the added benefit of fuller coverage of the design space.

For example, with a two-dimensional problem that has only two input parameters and uses only six design points, it can be difficult to build an adequate response surface. This is especially true in the case of LHS because of its nonuniform distribution of design points over the design space.

When the number of design points for the same scenario is increased to twenty, the quality of the resulting response surface is improved. The LHS method, however, can result in close, uneven groupings of design points and so can skip parts of the design space. The OSF method, with its maximization of the distance between points and more uniform distribution of points, addresses extremes more effectively and provides far better coverage of the design space. For this reason, OSF is the recommended method.

Using a Central Composite Design DOE

In Central Composite Design (CCD), a Rotatable (spherical) design is preferred since the prediction variance is the same for any two locations that are the same distance from the design center. However, there are other criteria to consider for an optimal design setup. Among these criteria, there are two commonly considered in setting up an optimal design using the design matrix.

1. The degree of non-orthogonality of regression terms can inflate the variance of model coefficients.

2. The position of a sample point with respect to the other sample points can give it an abnormal influence on the fit over a subset of the entire set of observations.


An optimal CCD design should minimize both the degree of non-orthogonality of term coefficients and the opportunity of sample points having abnormal influence. In minimizing the degree of non-orthogonality, the Variance Inflation Factor (VIF) of regression terms is used. For a VIF-Optimality design, the maximum VIF of the regression terms is to be minimized; the minimum value is 1.0. In minimizing the opportunity of influential sample points, the leverage value of each sample point is used. Leverages are the diagonal elements of the Hat matrix, which is a function of the design matrix. For a G-Optimality design, the maximum leverage value of sample points is to be minimized.

For a VIF-Optimality design, the alpha value/level is selected such that the maximum VIF is minimized. Likewise, for a G-Optimality design, the alpha value/level is selected such that the maximum leverage is minimized. The rotatable design is found to be a poor design in terms of VIF- and G-Efficiencies.
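Both quantities can be computed directly from a design matrix: leverages are the diagonal of the hat matrix, and the VIF of a column follows from regressing it on the other columns. A hedged NumPy sketch with made-up data, not DesignXplorer's implementation:

```python
import numpy as np

def leverages(X):
    """Diagonal of the hat matrix H = X (X'X)^-1 X', computed stably via a
    QR factorization (H = Q Q', so diag(H) is the squared row norms of Q)."""
    Q, _ = np.linalg.qr(X)
    return np.sum(Q**2, axis=1)

def vifs(X):
    """Variance Inflation Factor of each column: 1 / (1 - R_j^2), where
    R_j^2 comes from regressing column j on the remaining columns."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        r2 = 1.0 - (y - others @ beta).var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Made-up design matrices: near-orthogonal columns give VIFs near 1,
# nearly collinear columns inflate the VIF dramatically.
rng = np.random.default_rng(4)
a = rng.normal(size=100)
b = rng.normal(size=100)
X_ortho = np.column_stack([a, b])
X_collinear = np.column_stack([a, a + 0.01 * b])
```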

For an optimal CCD, the alpha value/level is selected such that both the maximum VIF and the maximum leverage are the minimum possible. For the Auto Defined design, the alpha value is selected from either the VIF- or G-Optimality design that meets the criteria. Since this is a multi-objective optimization problem, in many cases there is no unique alpha value for which both criteria reach their minimum. However, the alpha value is evaluated such that one criterion reaches its minimum while the other approaches its minimum.

For the current Auto Defined setup, all multi-variable problems use VIF-Optimality except for a problem with five variables, which uses the G-Optimality design. In some cases, despite the fact that Auto Defined provides an optimal alpha meeting the criteria, an Auto Defined design might not give as good a response surface as anticipated due to the nature of the physical data used for fitting in the regression process. In that case, you should try other design types that might give a better response surface approximation.

Note

Default values for CCD Types can be set in the Design Exploration > Design of Experiments section of the Options dialog box, which is accessible from the Tools menu.

It is good practice to always verify some selected points on the response surface with an actual simulation evaluation to determine whether the surface is valid for further analyses. In some cases, a good response surface does not mean a good representation of the underlying physics problem, since the response surface is generated according to the predetermined sampling points in the design space, which sometimes misses capturing an unexpected change in some regions of the design space. In that case, you should try Extended DOE. In Extended DOE, a mini CCD is appended to a standard CCD design: a second alpha value is added and set to half the alpha value of the Standard CCD. The mini CCD is set up in a way that the essential properties of the CCD design (rotatability and symmetry) are still maintained. The appended mini CCD serves two purposes:

1. to capture a drastic change within the design space, if any.

2. to provide a better response surface fit.

These two purposes can conflict: in some cases, the resulting response surface might not be as good as that of the Standard DOE due to the limitation of a quadratic response surface in capturing a drastic change within the design space.

The location of the generated design points for the deterministic method is based on a central composite design. If N is the number of input parameters, then a central composite design consists of:

• One center point.


• 2*N axis points located at the -α and +α position on each axis of the selected input parameters.

• 2^(N-f) factorial points located at the -1 and +1 positions along the diagonals of the input parameter space.

The fraction f of the factorial design and the resulting number of design points are given in the following table:

Number of Generated Design Points as a Function of the Number of Input Parameters

Number of Input Parameters    Factorial Number f    Number of Design Points
            1                          0                        5
            2                          0                        9
            3                          0                       15
            4                          0                       25
            5                          1                       27
            6                          1                       45
            7                          1                       79
            8                          2                       81
            9                          2                      147
           10                          3                      149
           11                          4                      151
           12                          4                      281
           13                          5                      283
           14                          6                      285
           15                          7                      287
           16                          8                      289
           17                          9                      291
           18                          9                      549
           19                         10                      551
           20                         11                      553
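The table values follow directly from the structure described above: one center point, 2*N axis points, and 2^(N-f) factorial points. A minimal sketch of the count, with the fraction values f taken from the table:

```python
# Number of CCD design points: 1 center + 2*N axis + 2**(N - f) factorial points.
# The fraction f for each number of input parameters N is taken from the table above.
FRACTION_F = {1: 0, 2: 0, 3: 0, 4: 0, 5: 1, 6: 1, 7: 1, 8: 2, 9: 2, 10: 3,
              11: 4, 12: 4, 13: 5, 14: 6, 15: 7, 16: 8, 17: 9, 18: 9, 19: 10, 20: 11}

def ccd_point_count(n_inputs: int) -> int:
    """Total number of generated design points for N input parameters."""
    f = FRACTION_F[n_inputs]
    return 1 + 2 * n_inputs + 2 ** (n_inputs - f)

# For example, 5 inputs give 1 + 10 + 16 = 27 design points.
print(ccd_point_count(5))  # 27
```
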

Upper and Lower Locations of DOE Points

The upper and lower levels of the DOE points depend on whether the input variable is a design variable (optimization) or an uncertainty variable (Six Sigma Analysis). Generally, a response surface will be more accurate when closer to the DOE points. Therefore, the points should be close to the areas of the input space that are critical to the effects being examined.

For example, for Goal Driven Optimization, the DOE points should be located close to where the optimum design is determined to be. For a Six Sigma Analysis, the DOE points should be close to the area where failure is most likely to occur. In both cases, the location of the DOE points depends upon the outcome of the analysis. Not having that knowledge at the start of the analysis, you can determine the location of the points as follows:

• For a design variable, the upper and lower levels of the DOE range coincide with the bounds specified for the input parameter.


It often happens in optimization that the optimum point is at one end of the range specified for one or more input parameters.

• For an uncertainty variable, the upper and lower levels of the DOE range are the quantile values corresponding to a probability of 0.1% and 99.9%, respectively.

This is the standard procedure whether the input parameter follows a bounded (e.g., uniform) or unbounded (e.g., Normal) distribution, because the probability that the input variable value will exactly coincide with the upper or lower bound (for a bounded distribution) is exactly zero. That is, failure can never occur when the value of the input variable is equal to the upper or lower bound. Failure typically occurs in the tails of a distribution, so the DOE points should be located there, but not at the very end of the distribution.
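For an unbounded distribution, the DOE levels are therefore the 0.1% and 99.9% quantiles. A quick sketch with the Python standard library, assuming a Normal input variable with illustrative parameters:

```python
from statistics import NormalDist

# Uncertainty variable: Normal distribution with mean 10.0 and standard
# deviation 0.5 (illustrative values, not from the guide).
dist = NormalDist(mu=10.0, sigma=0.5)

# DOE lower/upper levels are the quantiles at probabilities 0.1% and 99.9%,
# i.e. about 3.09 standard deviations from the mean for a Normal distribution.
lower = dist.inv_cdf(0.001)
upper = dist.inv_cdf(0.999)
print(lower, upper)
```
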

DOE Matrix Generation

The DOE matrix is generated when you Preview or Update your DOE. This matrix consists of generated design points that are submitted to the parent analysis systems for solution. Output parameter values are not available until after the Update. You can monitor the progress of the design generation by clicking the Show Progress button in the lower right corner of the window when the DOE matrix generation is prolonged; for example, with Optimal Space Filling or certain VIF- or G-Optimality designs.

Note

The design points will be solved simultaneously if the analysis system is configured to perform simultaneous solutions; otherwise, they will be solved sequentially.

To clear the design points generated for the DOE matrix, return to the Project Schematic, right-click on the Design of Experiments cell, and select Clear Generated Data. You can clear data from any design exploration cell in the Project Schematic in this way, and regenerate your solution for that cell with changes to the parameters if you so desire.

Importing and Copying Design Points

If you are using a “Custom” or “Custom+Sampling” Design of Experiments, you can import design point values from an external CSV file or copy existing design points from the Parameter Set cell.

The Import Design Points operation is available in the contextual menu when you right-click:

• a Design of Experiments component from the Project Schematic view,

• the Design of Experiments node from the Outline view,

• on the table of design points, in the Table view of the Design of Experiments tab.

Note

These operations are only available if the type of Design of Experiments is Custom or Custom + Sampling.

See “Importing Data from a CSV file” for more information.

Release 15.0 - © SAS IP, Inc. All rights reserved. - Contains proprietary and confidential informationof ANSYS, Inc. and its subsidiaries and affiliates.74

Using Design of Experiments

Page 83: Design Exploration Users Guide

Copy all Design Points from the Parameter Set into a DOE Table

1. From the Project Schematic, double-click on the Design of Experiments cell where you want to copy the design points.

2. If needed, click on the Design of Experiments object in the Outline and set the Design of Experiments Type to Custom or Custom + Sampling.

3. Right-click the Design of Experiments object in the Outline, or right-click in the Table view, and selectCopy all Design Points from the Parameter Set.

All design points from the Parameter Set will be copied to the Design of Experiments table if the values of the parameters in the design points fall within the defined ranges of the parameters in the DOE. If some of the parameter values in the design points are out of range, you will be given the option to automatically expand the parameter ranges in the DOE to accommodate the out-of-range values. If you choose not to expand the parameter ranges, only the design points where all parameters fall within the existing ranges will be copied into the DOE table. You may also cancel the import action.
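The range-checking behavior just described can be sketched as follows. This is a hypothetical helper written for illustration, not DesignXplorer's API:

```python
# Hypothetical sketch of the copy logic described above; not DesignXplorer's API.
def copy_design_points(points, ranges, expand_ranges):
    """points: list of {parameter: value}; ranges: {parameter: (lo, hi)}.

    If expand_ranges is True, grow each parameter range so that every point
    fits; otherwise keep only the points whose values all fall within the
    existing ranges (points with any out-of-range value are skipped).
    """
    if expand_ranges:
        for point in points:
            for name, value in point.items():
                lo, hi = ranges[name]
                ranges[name] = (min(lo, value), max(hi, value))
        return points
    return [p for p in points
            if all(ranges[n][0] <= v <= ranges[n][1] for n, v in p.items())]

ranges = {"thickness": (1.0, 2.0)}
pts = [{"thickness": 1.5}, {"thickness": 2.5}]
# Without expansion, only the in-range point (1.5) is copied.
print(copy_design_points(pts, dict(ranges), expand_ranges=False))
```
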

Copy Selected Design Points from the Parameter Set

1. From the Project Schematic, double click on the Parameter Set.

2. Select one or more design points from the Table of Design Points.

3. Right-click and select Copy Design Points to:, then select one of the DOE cells listed in the submenu. Only DOE cells with a DOE Type of Custom or Custom + Sampling will be listed.

All selected design points will be copied from the Parameter Set to the selected Design of Experiments table following the same procedure as above.

Note

If an unsolved design point was previously copied to a custom DOE, and subsequently this design point is solved in the Parameter Set, you can copy it to the custom DOE again to push its output values to the custom DOE table.


Using Response Surfaces

Response surfaces are functions of various types in which the output parameters are described in terms of the input parameters. They are built from the Design of Experiments in order to quickly provide approximated values of the output parameters everywhere in the analyzed design space, without having to perform a complete solution.

The accuracy of a response surface depends on several factors: the complexity of the variations of the solution, the number of points in the original Design of Experiments, and the choice of the response surface type. ANSYS DesignXplorer provides tools to estimate and improve the quality of the response surfaces.

Once response surfaces are built, you can create and manage response points and charts. These postprocessing tools allow you to explore the design and understand how each output parameter is driven by the input parameters and how the design can be modified to improve its performance.

This section contains information about using the Response Surface:

• Meta-Model Types (p. 77)

• Meta-Model Refinement (p. 89)

• Goodness of Fit (p. 92)

• Min-Max Search (p. 98)

• Response Surface Charts (p. 99)

Meta-Model Types

Several different meta-modeling algorithms are available to create the response surface:

• Standard Response Surface - Full 2nd-Order Polynomial (p. 77) (Default)

• Kriging (p. 78)

• Non-Parametric Regression (p. 83)

• Neural Network (p. 83)

• Sparse Grid (p. 84)

Standard Response Surface - Full 2nd-Order Polynomial

This is the default.

Regression analysis is a statistical methodology that utilizes the relationship between two or more quantitative variables so that one dependent variable can be estimated from the others.


A regression analysis assumes that there are a total of n sampling points and that, for each sampling point, the corresponding values of the output parameters are known. The regression analysis then determines the relationship between the input parameters and the output parameter based on these sample points. This relationship also depends on the chosen regression model. Typically, a second-order polynomial is preferred as the regression model. In general, this regression model is an approximation of the true input-to-output relationship, and only in special cases does it yield a true and exact relationship. Once this relationship is determined, the resulting approximation of the output parameter as a function of the input variables is called the response surface.

Forward-Stepwise-Regression

In forward-stepwise-regression, the individual regression terms are iteratively added to the regression model if they are found to cause a significant improvement of the regression results. For the DOE, a partial F-test is used to determine the significance of the individual regression terms.

Improving the Regression Model with Transformation Functions

Only in special cases can output parameters of a finite element analysis, such as displacements or stresses, be exactly described by a second-order polynomial as a function of the input parameters. Usually a second-order polynomial provides only an approximation. The quality of the approximation can be significantly improved by applying a transformation function. By default, the Yeo-Johnson transformation is used.

If the Goodness of Fit of the response surface is not as good as expected for an output parameter, it is possible to select a different Transformation Type. The Yeo-Johnson transformation is more numerically stable in its back-transformation. The Box-Cox transformation, by contrast, is less numerically stable in its back-transformation, but it gives a better fit in some cases. Selecting the Transformation Type None means that the standard response surface regression is computed without any transformation.
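For reference, the Yeo-Johnson transformation itself, in its standard textbook form (the exponent λ is fitted during the regression, which is not shown here). Unlike Box-Cox, it is defined for negative values as well:

```python
import math

def yeo_johnson(x, lam):
    """Yeo-Johnson transformation (standard textbook form).

    Unlike Box-Cox, it is defined for negative x as well, which helps the
    numerical stability of the back-transformation.
    """
    if x >= 0:
        return math.log1p(x) if lam == 0 else ((x + 1) ** lam - 1) / lam
    return (-math.log1p(-x) if lam == 2
            else -(((-x + 1) ** (2 - lam)) - 1) / (2 - lam))

print(yeo_johnson(0.0, 0.5))  # 0.0 (the transform always maps 0 to 0)
```
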

Kriging

Kriging is a meta-modeling algorithm that provides an improved response quality and fits higher order variations of the output parameter. It is an accurate multidimensional interpolation combining a polynomial model similar to the one of the standard response surface (which provides a "global" model of the design space) plus local deviations, so that the Kriging model interpolates the DOE points. The Kriging meta-model provides refinement capabilities for continuous input parameters, including those with Manufacturable Values (not supported for discrete parameters). The effectiveness of the Kriging algorithm is based on the ability of its internal error estimator to improve response surface quality by generating refinement points and adding them to the areas of the response surface most in need of improvement.

In addition to manual refinement capabilities, the Kriging meta-model offers an auto-refinement option which automatically and iteratively updates the refinement points during the update of the Response Surface. At each iteration of the refinement, the Kriging meta-model evaluates a Predicted Relative Error in the full parameter space. (DesignXplorer uses Predicted Relative Error instead of Predicted Error because this allows the same values to be used for all output parameters, even when the parameters have different ranges of variation.) At this step in the process, the Predicted Relative Error for one output parameter is the Predicted Error of the output parameter normalized by the known Maximum Variation of the output parameter, as follows:

Predicted Relative Error = ( Predicted Error / (Omax − Omin) ) × 100


where Omax and Omin are the maximum and minimum known values (on design points) of the output parameter.
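In code, the normalization amounts to the following (a sketch of the normalization only, not of the internal error estimator that produces the predicted error):

```python
def predicted_relative_error(predicted_error, o_max, o_min):
    """Normalize a predicted absolute error by the known output variation.

    o_max and o_min are the maximum and minimum known output values on the
    design points; the result is expressed as a percentage.
    """
    return 100.0 * predicted_error / (o_max - o_min)

# An output varying between 10.0 and 20.0 with a predicted error of 0.5
# gives a Predicted Relative Error of 5%.
print(predicted_relative_error(0.5, 20.0, 10.0))  # 5.0
```
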

For guidelines on when to use the Kriging algorithm, see Changing the Meta-Model (p. 90).

Note

• Beginning with ANSYS release 13.0, you do not need to do an initial solve of the response surface (without refinement points) before an auto-refinement.

• Beginning with ANSYS release 14.0, the Interactive refinement type is no longer available for the Kriging meta-model. If you have an ANSYS 13.0 project with the Kriging Refinement Type set to either Interactive or Auto, the refinement type will automatically be reset to Manual as part of the ANSYS release 14.0 migration process. You can leave the refinement type set to Manual or can change it to Auto after the migration.

How Kriging Auto-Refinement Works

Auto-refinement enables you to refine the Kriging response surface automatically by choosing certain refinement criteria and allowing an automated optimization-type procedure to add more points to the design domain where they are most needed.

The prediction of error is a continuous and differentiable function. In order to find the best candidate refinement point, the refinement process determines the maximum of the prediction function by running a gradient-based optimization procedure. If the predicted error at the new candidate refinement point exceeds the required accuracy, the point is promoted as a new refinement point.

The auto-refinement process continues iteratively, locating and adding new refinement points until either the refinement has converged (i.e., the Response Surface is accurate enough for direct output parameters) or the maximum allowable number of refinement points has been generated.
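The overall loop can be summarized as follows. This is a schematic sketch, not DesignXplorer's implementation; the real process uses Kriging's internal error estimator and a gradient-based search for the worst point:

```python
def auto_refine(predict_error_at, find_worst_point, add_refinement_point,
                max_points, max_relative_error):
    """Schematic Kriging auto-refinement loop (illustrative only).

    predict_error_at(point):   predicted relative error (%) at a point.
    find_worst_point():        search for the point maximizing the predicted
                               relative error on the current surface.
    add_refinement_point(pt):  solves the point and rebuilds the surface.
    Returns True when converged, False when the point budget is exhausted.
    """
    n_points = 0
    while n_points < max_points:
        candidate = find_worst_point()
        if predict_error_at(candidate) <= max_relative_error:
            return True          # converged: surface is accurate enough
        add_refinement_point(candidate)
        n_points += 1
    return False                 # stopped at the refinement-point budget
```
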

For more detailed information, see Kriging Algorithms (p. 217) in the DesignXplorer Theory chapter.

Using Kriging Auto-Refinement

When using the Kriging auto-refinement capabilities, there are four main steps to the process: setting up the response surface, setting up the output parameters, updating the response surface, and generating verification points. Each step is addressed below.

Setting Up the Response Surface

To set up a Response Surface for Kriging auto-refinement:

1. In the Outline view for the Response Surface, select the Response Surface cell.

2. In the Meta Model section of the Properties view, set the Response Surface Type to Kriging.

3. In the Refinement section of the Properties view:

• Set the Refinement Type to Auto.

• Enter the Maximum Number of Refinement Points. This is the maximum number of refinement points that can be generated.


• Enter the Maximum Predicted Relative Error (%). This is the maximum predicted relative error that is acceptable for all parameters.

4. In the Refinement Advanced Options section of the Properties view:

• Select the Output Variable Combinations. This determines how output variables will be considered in terms of predicted relative error and controls the number of refinement points that will be created per iteration.

• Enter the Maximum Crowding Distance Separation Percentage. This determines the minimum allowable distance between new refinement points.

5. In the Verification Points section of the Properties view:

• If you wish to generate verification points, select the Generate verification points check box.

• If you selected the Generate Verification Points check box, the Number of Verification Points field displays with the default value of 1. Enter a value if you want a different number of verification points to be generated.

For detailed descriptions of Response Surface fields in the Properties view, see Kriging Auto-Refinement Properties.

Setting up the Output Parameters

1. In the Outline view for the Response Surface, select an output parameter.

2. In the Refinement section of the Properties view for the selected output parameter, select or deselect the Inherit From Model Settings check box. This determines whether the Maximum Predicted Relative Error defined at the Model level will be applicable to this parameter.

3. If the Inherit From Model Settings check box is deselected, enter the Maximum Predicted Relative Error (i.e., the maximum predicted relative error that you will accept) for the output parameter. This can be different than the Maximum Predicted Relative Error defined at the Model level.

For detailed descriptions of output properties in the Properties view, see Kriging Auto-Refinement Properties.

Updating the Response Surface

Once you’ve defined the Response Surface and output parameter properties, complete the refinement by right-clicking the Response Surface node and selecting Update from the context menu. Alternatively, in the component editing view, you can click the Update button on the Toolbar.

The generated points for the refinement will appear in the Response Surface Table view under Refinement Points. As the refinement points are updated, the Convergence Curves chart updates dynamically, allowing you to monitor the progress of the Kriging auto-refinement. See Kriging Convergence Curves Chart for more information.

The auto-refinement process continues until either the maximum number of refinement points is reached or the Response Surface is accurate enough for direct output parameters. If all output parameters have a Predicted Relative Error that is less than the Maximum Predicted Relative Error defined for them in the Properties view, the refinement is converged.


Generating Verification Points

Once the auto-refinement has converged (and particularly at the first iteration without refinement points), it is recommended that you generate verification points to validate the Response Surface accuracy.

As with the Sparse Grid response surface, Kriging is an interpolation. Goodness of Fit is not a reliable measure for Kriging because the Response Surface passes through all of the design points, making the Goodness of Fit appear to be perfect. As such, the generation of verification points is essential for assessing the quality of the Response Surface and understanding the actual Goodness of Fit.

If the error at the verification points is larger than the Predicted Relative Error given by Kriging, you can insert the verification points as refinement points (this must be done in manual refinement mode) and then run a new auto-refinement so that the new points will be included in the generation of the Response Surface. For more information on how to insert verification points as refinement points, see Using Verification Points (p. 96).

Kriging Auto-Refinement Properties

Refinement Properties

The Refinement properties in the Response Surface Properties view determine the number and the spread of the refinement points. To access these properties, select Response Surface in the Outline view. The properties are in the Refinement section of the Properties view.

• Maximum Number of Refinement Points: Maximum number of refinement points that can be generated for use with the Kriging algorithm.

• Number of Refinement Points: Number of existing refinement points.

• Maximum Predicted Relative Error (%): Maximum predicted relative error that is acceptable for all parameters.

• Predicted Relative Error (%): Predicted relative error for all parameters.

• Converged: Indicates the state of the convergence. Possible values are Yes and No.

Refinement Advanced Options

The Refinement Advanced Options in the Response Surface Properties view determine how output variables are considered and the minimum allowable distance between refinement points. To access these properties, select Response Surface in the Outline view. The properties are in the Refinement Advanced Options section of the Properties view.

• Output Variable Combinations:

– Maximum Output: Only the output with the largest Predicted Relative Error is considered. Only one refinement point is generated in each iteration.

– All Outputs: All outputs are considered. Multiple refinement points are generated in each iteration.

– Sum of Outputs: The combined Predicted Relative Error of all outputs is considered. Only one refinement point is generated in each iteration.

• Crowding Distance Separation Percentage: Minimum allowable distance between new refinement points, implemented as a constraint in the search for refinement points. If two candidate refinement points are closer together than the defined minimum distance, only the first candidate is inserted as a new refinement point.

Output Parameter Settings

The output parameter settings in the Refinement section of the Output Parameter Properties view determine how the Maximum Predicted Relative Error applies to output parameters. Auto-refinement can be set for one or more output parameters, depending on the Output Variable Combinations selected. To access these settings, select an output parameter in the Outline view. The settings are in the Refinement section of the Properties view for the output parameter.

• Inherit From Model Settings: Indicates whether the Maximum Predicted Relative Error defined by the user at the Model level is applicable for this output parameter. Possible values are Yes and No. The default value is Yes.

• Maximum Predicted Relative Error: Displays only when the Inherit From Model Settings property is set to No. Maximum predicted relative error that the user will accept for this output parameter. This value can be different than the Maximum Predicted Relative Error defined at the Model level.

• Predicted Relative Error: Predicted Relative Error for this output parameter.

Kriging Convergence Curves Chart

The Kriging Convergence Curves chart allows you to monitor the automatic refinement process of the Kriging Response Surface for one or more selected output parameters. The X-axis displays the number of refinement points used to refine the Response Surface, and the Y-axis displays the percentage of the Current Predicted Relative Error. The chart is automatically inserted in the Metrics folder in the outline and is dynamically updated while the Kriging refinement runs.

There are two curves for each output parameter: one curve representing the percentage of the Current Predicted Relative Error, and one curve representing the Maximum Predicted Relative Error required for that parameter.

Additionally, there is a single curve that represents the Maximum of the Predicted Relative Error for output parameters that are not converged.

You can stop or interrupt the process from the progress bar to adjust the requested Maximum Error (or to change chart properties) during the run.


Non-Parametric Regression

Non-parametric regression provides improved response quality and is initialized with one of the available DOE types.

The non-parametric regression (NPR) algorithm is implemented in ANSYS DesignXplorer as a metamodeling technique prescribed for predictably high nonlinear behavior of the outputs with respect to the inputs.

NPR belongs to the general class of Support Vector Method (SVM) type techniques. These are data classification methods which use hyperplanes to separate data groups. The regression method works in a similar way, the main difference being that the hyperplane is used to categorize a subset of the input sample vectors which are deemed sufficient to represent the output in question. This subset is called the “support vector” set.

In the current version, the internal parameters of the meta-model are fixed to constant values and are not optimized. The values are determined from a series of benchmark tests and strike a compromise between the meta-model accuracy and computational speed. For a large family of problems, the current settings will provide good results; however, for some problem types (like ones dominated by flat surfaces or lower order polynomials), some oscillations may be noticed between the DOE points.

In order to circumvent this, you can use a larger number of DOE points or, depending on the fitness landscape of the problem, use one of the several optimal space filling DOEs provided. In general, it is suggested that problems first be fitted with a quadratic response surface, and the NPR fitting adopted only when the Goodness of Fit metrics from the quadratic response surface model are unsatisfactory. This ensures that NPR is only used for problems where low order polynomials do not dominate.

Neural Network

This mathematical technique is inspired by the natural neural networks in the human brain.


In order to interpolate a function, we build a network with three levels (Input, Hidden, Output) in which the connections between levels are weighted. (The original guide shows a diagram of this network here.)

Each arrow is associated with a weight (w), and each ring is called a cell (analogous to a neuron).

If the inputs are x_i, the hidden level contains functions g_j(x_i), and the output solution is:

S_k = K( Σ_j w_jk · g_j(x_i) )

where K is a predefined function, such as the hyperbolic tangent or an exponential-based function, chosen to obtain something similar to the binary behavior of the electrical brain signal (like a step function). The function is continuous and differentiable.
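A minimal sketch of evaluating such a three-level network, using tanh as the predefined function K. The weights shown are illustrative only; in practice they come from the learning step described below:

```python
import math

def evaluate(x_inputs, w_hidden, w_out):
    """Evaluate a three-level network: inputs -> hidden cells -> one output.

    w_hidden[j][i] weighs input i into hidden cell j; w_out[j] weighs hidden
    cell j into the output. tanh plays the role of the predefined function K.
    """
    hidden = [math.tanh(sum(w * x for w, x in zip(row, x_inputs)))
              for row in w_hidden]
    return math.tanh(sum(w * g for w, g in zip(w_out, hidden)))

# Illustrative weights for 2 inputs and 3 hidden cells (not fitted values).
y = evaluate([0.5, -1.0],
             w_hidden=[[0.2, 0.8], [-0.5, 0.1], [1.0, -0.3]],
             w_out=[0.6, -0.4, 0.9])
print(y)  # a value in (-1, 1), since tanh bounds the output
```
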

The weights (w_jk) are issued from an algorithm which minimizes (as in the least squares method) the distance between the interpolation and the known values (design points). This is called the learning. The error is checked at each iteration against the design points which are not used for the learning; the design points must therefore be separated into learning design points and error-checking design points.

The error decreases and then increases when the interpolation order is too high. The minimization algorithm is stopped when the error is the lowest.

This method uses a limited number of design points to build the approximation. It works better when the number of design points and the number of intermediate cells are high, and it can give interesting results with several parameters.

Sparse Grid

The Sparse Grid meta-model provides refinement capabilities for continuous parameters, including those with Manufacturable Values (not supported for discrete parameters). Sparse Grid uses an adaptive response surface, which means that it refines itself automatically. A dimension-adaptive algorithm allows it to determine which dimensions are most important to the objective functions, thus reducing computational effort.


How Sparse Grid Works

Sparse Grid allows you to select certain refinement criteria. When you update the response surface, Sparse Grid uses an automated local refinement process to determine the areas of the response surface that are most in need of further refinement. It then concentrates refinement points in these areas, allowing the response surface to reach the specified level of accuracy more quickly and with fewer design points.

The Sparse Grid adaptive algorithm is based on a hierarchy of grids. The Sparse Grid Initialization DOE type generates a DOE matrix containing all the design points for the smallest required grid: level 0 (the point at the current values) plus level 1 (two points per input parameter). If the expected level of quality is not met, the algorithm further refines the grid by building a new level in the corresponding directions. This process is repeated until one of the following occurs:

• The response surface reaches the requested level of accuracy.

• The maximum number of refinement points (as defined by the Maximum Number of Refinement Points property) has been reached.

• The maximum depth (the maximum number of levels that can be created in the hierarchy, as defined by the Maximum Depth property) is reached in a given direction. Once the maximum depth for a direction is reached, there is no further refinement in that direction.

The relative error for an output parameter is the error between the predicted and the observed output values, normalized by the known maximum variation of the output parameter at this step of the process. Since there are multiple output parameters to process, DesignXplorer computes the worst relative error value for all of the output parameters and then compares this against the maximum relative error (as defined by the Maximum Relative Error property). So long as at least one output parameter has a relative error greater than the expected error, the maximum relative error criterion is not validated.
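The error test and stopping criteria described above can be sketched in a few lines of Python. This is an illustration only, not the DesignXplorer API; the function names and data layout are hypothetical.

```python
# Illustrative sketch (not DesignXplorer's code): compute the worst relative
# error across output parameters and test the documented stopping criteria.

def worst_relative_error(predicted, observed, known_variation):
    """predicted/observed: dict name -> list of output values at the check
    points; known_variation: dict name -> known max variation of that output."""
    worst = 0.0
    for name in observed:
        variation = known_variation[name]
        for p, o in zip(predicted[name], observed[name]):
            # error normalized by the known variation of this output
            rel = abs(p - o) / variation if variation > 0 else 0.0
            worst = max(worst, rel)
    return worst

def refinement_should_stop(worst_rel_error, n_refinement_points,
                           depth_reached_in_all_directions,
                           max_rel_error=0.05, max_points=1000):
    # Refinement stops when any one of the criteria listed above is met.
    return (worst_rel_error <= max_rel_error
            or n_refinement_points >= max_points
            or depth_reached_in_all_directions)
```

So long as at least one output's relative error exceeds `max_rel_error` (and the point and depth limits are not hit), `refinement_should_stop` returns `False` and refinement continues.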

Sparse Grid Requirements

The Sparse Grid response surface requires a specific Clenshaw-Curtis grid that is generated only by the Sparse Grid Initialization DOE type, so the Sparse Grid response surface can only be used with the Sparse Grid Initialization DOE; if you’ve defined another type of DOE, the Sparse Grid response surface cannot be updated. Note, however, that the Sparse Grid Initialization DOE type can be used by other types of response surfaces.

Because Sparse Grid uses an automatic refinement algorithm, it is not possible to add refinement points manually. As a result, for a Sparse Grid response surface:

• The Sparse Grid Refinement Type property is set to Auto and cannot be changed.

• The Insert as Refinement Point operation is not available from the right-click context menu. Also, if you use commands to attempt to insert a refinement point, you will receive an error.

Related Topics:

Sparse Grid Algorithms (p. 221)
Using Sparse Grid Refinement
Sparse Grid Auto-Refinement Properties
Sparse Grid Convergence Curves Chart


Meta-Model Types


Using Sparse Grid Refinement

When using Sparse Grid auto-refinement capabilities, there are four main steps to the process: setting up the Design of Experiments, setting up the Response Surface, updating the Response Surface, and generating verification points. Each step is addressed below.

Setting Up the Design of Experiments

To set up the Design of Experiments for Sparse Grid auto-refinement:

1. In the Outline view for the Design of Experiments, select the Design of Experiments node.

2. In the Design of Experiments section of the Properties view, set the Design of Experiments Type to Sparse Grid.

3. Either preview or update the Design of Experiments to validate your selections. This will cause the Response Surface Type property for downstream Response Surface components to default to Sparse Grid.

Note

If the DOE component is shared by multiple systems, the DOE definition will apply to the Response Surface components for each of those systems.

Setting Up the Response Surface

To set up refinement properties for a Sparse Grid auto-refinement:

1. In the Outline view for the Response Surface, select the Response Surface node.

2. In the Meta Model section of the Properties view, verify that the Response Surface Type property is set to Sparse Grid.

3. In the Refinement section of the Properties view:

• Enter the Maximum Relative Error (%) to be allowed for all of the output parameters. The smaller the value, the more accurate the response surface will be.

• Enter the Maximum Depth, or number of grid levels that can be created in a given direction. (You can also adjust this value later, as needed according to your update results.)

• Enter the Maximum Number of Refinement Points, the maximum number of refinement points that can be generated as part of the refinement process.

For detailed descriptions of Sparse Grid response surface properties, see Sparse Grid Auto-Refinement Properties (p. 87).

Updating the Response Surface

Once you’ve defined the Design of Experiments and Response Surface, update the Response Surface by clicking the Update button in the toolbar or by right-clicking the component cell and selecting Update. This begins the adaptive process that generates the Sparse Grid response surface.

At any time, you can interrupt the Sparse Grid refinement via the Progress bar to change properties or to see partial results; the refinement points that have already been calculated are visible, and the


displayed charts are based on the Response Surface’s current level of refinement. The refinement points that have not been calculated are displayed with an Update Required icon to indicate which output parameters require an update.

If a design point fails during the refinement process, Sparse Grid stops refining in the area where the failed design point is located, but continues to refine in the rest of the parameter space to the degree possible. Failed refinement points are indicated by the Update Failed, Update Required icon. You can attempt to pass the failed design points by re-updating the Response Surface.

The auto-refinement process continues until the Maximum Relative Error (%) objective is attained, the Maximum Depth limit is reached for all input parameters, the Maximum Number of Refinement Points is reached, or the response surface converges. If all output parameters have a Maximum Relative Error (%) that is higher than the Current Relative Error defined for them in the Properties view, the refinement is converged.

Note

If the Sparse Grid refinement does not appear to be converging, it is possible to accept the current level of convergence. To accept the current level of convergence:

1. Stop the process.

2. Either set the Maximum Relative Error (%) value slightly above the Current Relative Error (%) value, or set the Maximum Number of Refinement Points value equal to the Number of Refinement Points value.

3. Update the Response Surface again.

Generating Verification Points

As with the Kriging response surface, the Sparse Grid is an interpolation. To assess the quality of the response surface, verification points are needed. Once the Sparse Grid auto-refinement has converged, it is recommended that you generate verification points to validate the response surface accuracy.

Note

Since the Sparse Grid algorithm is an automatic refinement process, you cannot add refinement points manually as with other DOE types. To generate verification points automatically, select the Generate Verification Points check box in the Response Surface Properties view and then update the response surface.

For more detailed information on assessing the quality of a response surface, see Goodness of Fit (p. 92) and Using Verification Points (p. 96).

Sparse Grid Auto-Refinement Properties

The Refinement properties in the Sparse Grid response surface Properties view determine the number and the spread of the refinement points.

• Refinement Type: Type of refinement process. Defaults to Auto (read-only).


• Maximum Relative Error (%): Maximum relative error allowable for the response surface. This value is used to compare against the worst relative error obtained for all output parameters. So long as any output parameter has a relative error greater than the expected relative error, this criterion is not validated. This property is a percentage and defaults to 5.

• Maximum Depth: Maximum depth (number of hierarchy levels) that can be created as part of the Sparse Grid hierarchy. Once the number of levels defined in this property is reached for a direction, refinement does not continue in that direction. Defaults to 4. A minimum value of 2 is required (because the Sparse Grid Initialization DOE type already generates levels 0 and 1).

• Maximum Number of Refinement Points: Maximum number of refinement points that can be generated for use with the Sparse Grid algorithm. Defaults to 1000.

• Number of Refinement Points: Number of existing refinement points.

• Current Relative Error (%): Current level of relative error.

• Converged: Indicates the state of the convergence. Possible values are Yes and No.
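The properties above, with their documented defaults and the minimum-depth rule, can be mirrored in a small configuration class. The class and attribute names below are illustrative, not DesignXplorer's API.

```python
# Hypothetical mirror of the Sparse Grid auto-refinement properties and their
# documented defaults; names and validation are illustrative only.

class SparseGridRefinementProperties:
    def __init__(self, max_relative_error_pct=5.0, max_depth=4,
                 max_refinement_points=1000):
        if max_depth < 2:
            # The Sparse Grid Initialization DOE already generates levels 0 and 1.
            raise ValueError("Maximum Depth must be at least 2")
        self.refinement_type = "Auto"            # read-only in the UI
        self.max_relative_error_pct = max_relative_error_pct
        self.max_depth = max_depth
        self.max_refinement_points = max_refinement_points
        self.number_of_refinement_points = 0     # existing refinement points
        self.current_relative_error_pct = None   # filled in during the run
        self.converged = False                   # shown as Yes/No
```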

Sparse Grid Convergence Curves Chart

The Sparse Grid Convergence Curves chart allows you to monitor the automatic refinement process of the Sparse Grid Response Surface: the X-axis shows the number of design points used to build the response surface, and the Y-axis shows the convergence criteria. This chart is automatically inserted in the Metrics folder in the outline and is dynamically updated while Sparse Grid runs.

There is one curve per output parameter to represent the current relative error (as a percentage), and one curve to represent the maximum relative error for all direct output parameters. The automatic refinement stops when the required maximum relative error (represented as a horizontal threshold line) has been met.

You can disable one or several output parameter curves and keep only the curve of the maximum relative error.

During the run, you can use the Progress Bar to stop or interrupt the process so that you can adjust the requested maximum error (or change chart properties) before continuing.


Meta-Model Refinement

The quality of your response surface is affected by the type of meta-modeling algorithm used to generate it and is measured by the response surface’s Goodness of Fit. The following sections provide recommendations on making your initial selection of a meta-modeling algorithm, evaluating the meta-model performance in terms of Goodness of Fit, and changing the meta-model as needed to improve the Goodness of Fit.

Related Topics:

Working with Meta-Models
Changing the Meta-Model
Performing a Manual Refinement

Working with Meta-Models

Prepare Your Model to Enhance Goodness of Fit

• Use a sufficient number of design points. The number of design points should always exceed the number of inputs. Ideally, you should have at least twice as many design points as inputs. Most of the standard DOE types are designed to generate a sufficient number, but custom DOE types may not.

• Reduce the number of input parameters. Only keep the input parameters that play a major role in your study. Disable any inputs that are not relevant by deselecting them in the Design of Experiments Outline view. You can determine which are relevant from a correlation or sensitivity analysis.

Requirements and recommendations regarding the number of input parameters vary according to the DOE type selected. For more information, see Number of Input Parameters for DOE Types (p. 69).

• As a general rule, always create verification points. To specify that verification points will be created when the response surface is generated, select the Generate Verification Points check box in the Response Surface Properties view.
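The sizing rule in the first bullet can be encoded as a quick check. This is an illustrative helper, not part of DesignXplorer.

```python
# Illustrative helper encoding the rule of thumb above: the DOE should have
# more design points than inputs, and ideally at least twice as many.

def doe_size_rating(n_design_points, n_inputs):
    """Return 'too few', 'minimal', or 'sufficient' for a DOE size."""
    if n_design_points <= n_inputs:
        return "too few"        # never acceptable: points must exceed inputs
    if n_design_points < 2 * n_inputs:
        return "minimal"        # acceptable, but below the 2x ideal
    return "sufficient"
```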

Start with a Standard Response Surface

In practice, it’s a good idea to always begin with the standard response surface produced by the Standard Response Surface 2nd-Order Polynomial meta-model. Once the response surface is built, assess its quality by reviewing its Goodness of Fit information.

Review Goodness of Fit Information

To assess the quality of the response surface, you should review the Goodness of Fit information for each output parameter. To check Goodness of Fit information:

1. On the Project Schematic, right-click the Response Surface cell and select Edit from the context menu.

2. Under the Metrics node in the response surface Outline view, select Goodness of Fit.

Goodness of Fit information for each output parameter displays in the Table view and is illustrated in the Predicted vs. Observed chart in the Chart view.

3. Review the Goodness of Fit information for the response surface, paying particular attention to the Coefficient of Determination property in the Table view. The closer this value is to 1, the better the response surface. For details on different criteria, see Goodness of Fit Criteria (p. 93).


• If the Goodness of Fit is acceptable, add verification points and then recheck the Goodness of Fit. If needed, you can refine the response surface further manually. See Performing a Manual Refinement (p. 92) for details.

• If the Goodness of Fit is poor, try changing your meta-model. See Changing the Meta-Model (p. 90) for details.

Changing the Meta-Model

Changing meta-model types or changing the available options for the current meta-model can improve the Goodness of Fit. To change your meta-model type:

1. On the Project Schematic, right-click the Response Surface cell and select Edit from the context menu.

2. In the response surface Outline view, select the Response Surface node.

3. Under Meta-Model in the Properties view, select a different value for Response Surface Type.

4. Update the response surface either by clicking the Update button in the Toolbar or by right-clicking the Response Surface node and selecting Update from the context menu.

5. Review the Goodness of Fit information for each output parameter to see if the new meta-model provides a better fit.

Once you’ve achieved the desired level of Goodness of Fit, add verification points and then recheck the Goodness of Fit. If needed, you can refine the response surface further manually. See Performing a Manual Refinement (p. 92) for details.

Guidelines for Changing the Meta-Model

Kriging

If the Standard Response Surface 2nd-Order Polynomial meta-model does not produce a response surface with the desired level of Goodness of Fit, try the Kriging meta-model.

After updating the response surface with the Kriging method selected, recheck the Goodness of Fit.

For Kriging, the Coefficient of Determination must have a value of 1. If it does not, this means that the model is over-constrained and not suitable for refinement via the Kriging algorithm.

Kriging fits the response surface through all the design points, so many of the other metrics will always be perfect. Therefore, it is particularly important to run verification points with Kriging.

Non-Parametric Regression

If the model is over-constrained and not suitable for refinement via Kriging, try switching to the Non-Parametric Regression meta-model.

Other Meta-Models

If you decide to use one of the other meta-models, consider your selection carefully to ensure that the meta-model suits your specific purpose. See Meta-Model Characteristics (p. 91) for details.


Meta-Model Characteristics

Although it is recommended to always start with the Standard Response Surface 2nd-Order Polynomial meta-model and then switch to the Kriging meta-model, you can opt to use one of the other meta-model types.

Keep in mind that different meta-model types have varying characteristics, and so lend themselves to different types of responses; the meta-model used impacts the quality and Goodness of Fit of the resulting response surface. Use the characteristics noted below to guide your selection of the meta-model most suited to your design scenario.

Once you determine which meta-model has the best fit for your particular application, you can use that as your new default for similar projects in the future.

Standard Response Surface 2nd-Order Polynomial

• Default meta-model; creates a Standard Response Surface.

• Effective when the variation of the output is smooth with regard to the input parameters.

Sparse Grid

• Suited for studies containing discontinuities.

• Use when solve is fast.

Non-Parametric Regression

• Suited to nonlinear responses.

• Use when results are noisy.

• Typically slow to compute.

Neural Network

• Suited to highly nonlinear responses.

• Use when results are noisy.

• Control over the algorithm is very limited.

Kriging

• Efficient in a large number of cases.

• Suited to highly nonlinear responses.

• Do NOT use when results are noisy; Kriging is an interpolation that matches the points exactly.

• Always use verification points to check Goodness of Fit.


Performing a Manual Refinement

With the exception of Sparse Grid, the manual refinement capability is available for all of the meta-model types. Manual refinement is a way to force the response surface to take into account points of your choice, in addition to the points already in the Design of Experiments.

For example, after a first optimization study, you may want to insert the best candidate design as a refinement point in order to improve the response surface quality in this area of the design space.

Refinement points are essentially the same as design points since, to obtain the output parameter values of a refinement point, a design point update (a “real solve”) is performed by DesignXplorer.

You can also create a refinement point easily from existing results with the Insert as Refinement Point operation, which is available in the contextual menu when you right-click relevant entities in the Table view or the Chart view. For instance, you can right-click a response point or a Candidate Point and insert it as a refinement point.

The refinement points are listed in the Table view of the response surface, in a table entitled Refinement Points. You can insert a new refinement point directly in this table by entering the values of the input parameters.

To update the refinement points and rebuild the response surface to take them into account, click the Update button in the toolbar. Each out-of-date refinement point is updated, and then the response surface is rebuilt from the Design of Experiments points and the refinement points.

With manual refinement, you can insert a refinement point in the refinement points table and, beginning with ANSYS release 13.0, you do not need to do an initial solve of the response surface (without refinement points) before updating your Response Surface with your manual refinement.

You can change the edition mode of the Table of Refinement Points in order to edit the output parameter values. You can also copy/paste data and import data from a CSV file (right-click and select Import Refinement Points).

See Working with Tables (p. 205) for more information.

Goodness of Fit

You can view Goodness of Fit information for any of the output parameters in a response surface. To do so, edit the Response Surface cell from the Project Schematic and select the Goodness of Fit object under the Metrics section of the Outline view.

If any of the input parameters is discrete, a different response surface is built for each combination of the discrete levels, and the quality of the response surface might differ from one configuration to another. To review the Goodness of Fit for each of these different response surfaces, select the discrete parameter values in the Properties view. The Table and Chart views will display the information corresponding to the associated response surfaces.

Goodness of Fit is closely related to the meta-model algorithm used to generate the response surface. If the Goodness of Fit is not of the expected quality, you can try to improve it by adjusting the meta-model. See Changing the Meta-Model (p. 90) for more information.


Goodness of Fit Criteria

In the Table view, the Response Surface Goodness of Fit information is displayed for each output parameter. The following criteria are calculated for the points taken into account in the construction of the response surface, which are the Design of Experiments points and the refinement points:

• Coefficient of Determination (R²): The percent of the variation of the output parameter that can be explained by the response surface regression equation. That is, the Coefficient of Determination is the ratio of the explained variation to the total variation. The best value is 1.

The points used to create the response surface are likely to contain variation for each output parameter (unless all the output values are the same, which will result in a flat response surface). This variation is illustrated by the response surface that is generated. If the response surface were to pass directly through each point (which is the case for the Kriging meta-model), the Coefficient of Determination would be 1, meaning that all variation is explained.

Mathematically represented as:

R^2 = 1 - \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}    (1)

where: (p. 94)

• Adjusted Coefficient of Determination (Adjusted R²): The Adjusted Coefficient of Determination takes the sample size into consideration when computing the Coefficient of Determination. The best value is 1.

Usually this is more reliable than the usual Coefficient of Determination when the number of samples is small (< 30). Only available for standard response surfaces.

Mathematically represented as:

R_{adj}^2 = 1 - \frac{N - 1}{N - m - 1} \cdot \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}

where: (p. 94)

• Maximum Relative Residual: The maximum distance (relatively speaking), out of all of the generated points, from the calculated response surface to each generated point. The best value is 0%; in general, the closer the value is to 0%, the better the quality of the response surface.

However, in some situations, you can have a larger value and still have a good response surface. This may be true, for example, when the mean of the output values is close to zero. See the formula below.

Mathematically represented as:

\max_{i=1}^{N} \, \mathrm{Abs}\!\left( \frac{y_i - \hat{y}_i}{y_i} \right)

where: (p. 94)

93Release 15.0 - © SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information

of ANSYS, Inc. and its subsidiaries and affiliates.

Goodness of Fit

Page 102: Design Exploration Users Guide

• Root Mean Square Error: This is the square root of the average square of the residuals at the DOE points for regression methods. The best value is 0; in general, the closer the value is to 0, the better the quality of the response surface.

Mathematically represented as:

\sqrt{\frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2}

where: (p. 94)

• Relative Root Mean Square Error: This is the square root of the average square of the residuals scaled by the actual output values at the points for regression methods. The best value is 0%; in general, the closer the value is to 0%, the better the quality of the response surface.

However, in some situations, you can have a larger value and still have a good response surface. This may be true, for example, when part of the output values are close to zero. For example, you can obtain 100% relative error if the observed value = 1e-10 and the predicted value = 1e-8, but if the range of output values is 1, this error becomes negligible.

Mathematically represented as:

\sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( \frac{y_i - \hat{y}_i}{y_i} \right)^2}

where: (p. 94)

• Relative Maximum Absolute Error: This is the absolute maximum residual value relative to the standard deviation of the actual output data, modified by the number of samples. The best value is 0%; in general, the closer the value is to 0%, the better the quality of the response surface.

Note that this value and the Relative Average Absolute Error value correspond to the Maximum Error and the Average Absolute Error scaled by the standard deviation. For example, the Relative Root Mean Square Error becomes negligible if both of these values are small.

Mathematically represented as:

\frac{1}{\sigma_y} \max_{i=1}^{N} \, \mathrm{Abs}(y_i - \hat{y}_i)

where: (p. 94)

• Relative Average Absolute Error: This is the average of the residuals relative to the standard deviation of the actual outputs. This is useful when the number of samples is low (< 30). The best value is 0%; in general, the closer the value is to 0%, the better the quality of the response surface.

Note that this value and the Relative Maximum Absolute Error value correspond to the Maximum Error and the Average Absolute Error scaled by the standard deviation. For example, the Relative Root Mean Square Error becomes negligible if both of these values are small.

Mathematically represented as:

\frac{1}{N \sigma_y} \sum_{i=1}^{N} \mathrm{Abs}(y_i - \hat{y}_i)

where: (p. 94)

y_i = value of the output parameter at the i-th sampling point


\hat{y}_i = value of the regression model at the i-th sampling point

\bar{y} = arithmetic mean of the values y_i

\sigma_y = standard deviation of the values y_i

N = number of sampling points

m = number of polynomial terms for a quadratic response surface (not counting the constant term)

A graphical indication is given in each table cell as to how close the parameter comes to the ideal value for each Goodness of Fit characteristic. Three gold stars indicate the best match, while three red crosses indicate the worst.

Note

Root Mean Square Error has no rating because it is not a bounded characteristic.

Related Topics:

Predicted versus Observed Chart
Advanced Goodness of Fit Report
Using Verification Points

Predicted versus Observed Chart

In the Chart view, a scatter chart presents, for each output parameter, the values predicted from the response surface versus the values observed from the design points. This chart lets you quickly determine if the response surface correctly fits the points of the Design of Experiments and the refinement table: the closer the points are to the diagonal line, the better the response surface fits the points.


By default, all of the output parameters are displayed on the chart, and in this case the output values are normalized. Use the check boxes in the Properties view to remove or add an output parameter from the chart. If only one output parameter is plotted, the values are not normalized.

If you position the mouse cursor over a point of the chart, the corresponding parameter values appear in the Properties view, including the predicted and observed values for the output parameters.

The verification points are also visualized on the same chart. These points are not taken into account to build the response surface, so if they appear close to the diagonal line, the response surface is correctly representing the parametric model. Otherwise, not enough data has been provided for the response surface to capture the parametric behavior of the model, and the response surface needs to be refined.

If you right-click a point of the chart, you can insert it as a refinement point, in order to take it into account in the response surface. This is a good way to improve the response surface around a verification point which shows an insufficient Goodness of Fit.

You can add additional Goodness of Fit objects to the Metrics section of the Outline. Simply right-click the Metrics folder and select Insert Goodness of Fit. A new Goodness of Fit object will be added with its own table and chart. You can also delete Goodness of Fit objects from the Outline by right-clicking the object and selecting Delete.

It is possible to duplicate a Goodness of Fit by selecting it from the Outline view and either choosing Duplicate in the contextual menu or using the drag-and-drop mechanism. This operation will attempt an update of the Goodness of Fit, so that the duplication of an up-to-date Goodness of Fit results in an up-to-date Goodness of Fit.

Advanced Goodness of Fit Report

The Advanced Goodness of Fit report is available only for the standard response surface generated by the Standard Response Surface 2nd-Order Polynomial meta-model algorithm. It can be displayed for any output parameter.

To view the Advanced Goodness of Fit report for a given output parameter, right-click the column for that parameter in the Table view and select Generate Advanced Goodness of Fit Report from the context menu.

When reviewing the report, if the Maximum VIF for Full Regression Model value is very high (> 10), this means that there is a high interdependency among the terms and the response surface is not unique. In other words, the response surface is not reliable in spite of the good R² value. In this case, you should add more points to enrich the response surface.

Using Verification Points

Response surfaces are built from a Design of Experiments (DOE). The Goodness of Fit calculations compare the response surface outputs with the DOE results used to create them. For response surface types that try to find the "best fit" of the response surface to these DOE points (such as Full 2nd-Order Polynomial), you can get an idea of how well the fit was accomplished. However, for interpolated response surface methods that force the response surface to pass through all of the DOE points (such as Kriging), the Goodness of Fit will usually appear to be perfect. In this case, Goodness of Fit indicates that the response surface passed through the DOE points used to create it, but does not indicate whether the response surface captures the parametric solution.


A better way of verifying that the response surface accurately approximates the output parameter values is to compare the predicted and observed values of the output parameters with verification points. These points are separately calculated after the response surface and are useful in validating any type of response surface. In particular, you should always use verification points to validate the accuracy of interpolated response surfaces, such as Kriging or Sparse Grid.

You can specify that verification points are generated automatically by selecting the Generate Verification Points check box in the Response Surface Properties view. When this check box is selected, the Number of Verification Points property allows you to specify the number of verification points to be generated.

You can insert a new verification point directly into the Table of Verification Points. To do so, select the Response Surface cell in the Response Surface Outline view and then enter the values of the input parameters into the New Verification Point row of the Table view. Update the verification points and Goodness of Fit information by right-clicking the row and selecting Update from the context menu.

After the response surface is created, the verification points are placed in locations that maximize the distance from existing DOE points and refinement points (Optimal Space-Filling algorithm). A design point update (i.e., a "real solve") calculates each verification point. These verification point results are then compared with the response surface predictions and the difference is calculated.
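The placement strategy can be pictured as a greedy maximin selection: each new point is chosen to be as far as possible from every point already known. The sketch below is illustrative Python, not DesignXplorer's actual Optimal Space-Filling implementation; the candidate-pool approach and the function name are assumptions for demonstration.

```python
import math
import random

def maximin_verification_points(existing, n_new, bounds, n_candidates=200, seed=0):
    """Greedily pick n_new points that maximize the minimum distance to the
    points already known (a stand-in for an Optimal Space-Filling placement)."""
    rng = random.Random(seed)
    chosen = list(existing)
    new_points = []
    for _ in range(n_new):
        candidates = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                      for _ in range(n_candidates)]
        # Farthest-point choice: maximize the distance to the nearest known point.
        best = max(candidates,
                   key=lambda c: min(math.dist(c, p) for p in chosen))
        chosen.append(best)
        new_points.append(best)
    return new_points
```

Because each pick is made relative to all previously chosen points, the verification points end up spread through the regions the DOE left empty, which is where a poor fit is most likely to hide.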

The results are displayed in the Goodness of Fit Table view and Chart view. The Table view displays Goodness of Fit criteria calculated for the verification points (i.e., only the verification points are compared with the response surface in these calculations). From the Goodness of Fit information, you can determine whether the response surface accurately approximates output parameter values at each of the verification point locations. The Chart view also shows the verification points in the Predicted vs. Observed chart.
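Conceptually, this comparison amounts to computing error statistics between predicted and observed output values. A minimal sketch, using standard RMSE and R² formulas rather than DesignXplorer's exact criteria:

```python
import math

def goodness_of_fit(predicted, observed):
    """Compare response-surface predictions with verification-point results.
    Returns RMSE and the coefficient of determination (R^2)."""
    n = len(observed)
    residuals = [p - o for p, o in zip(predicted, observed)]
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    mean_obs = sum(observed) / n
    ss_res = sum(r * r for r in residuals)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r_squared = 1.0 - ss_res / ss_tot
    return rmse, r_squared
```

Because verification points are excluded from the fit, metrics like these stay meaningful even for interpolating surfaces such as Kriging, whose Goodness of Fit on its own DOE points is trivially perfect.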

If a verification point reveals that the current response surface is of poor quality, you can insert it as a refinement point so that it is taken into account to improve the accuracy of the response surface. To do so, select the Response Surface cell on the Response Surface Outline view. Right-click on the verification point in the Table view and select Insert as Refinement Point from the context menu.

Note

• The verification points are not used to build the response surface until they are turned into refinement points and the response surface is recalculated.

• The Insert as Refinement Point option is not available for Kriging and Sparse Grid response surfaces.

By default, the output parameters of the verification point table are grayed out and are only filled in by a real solve. However, you can change the editing mode of the Table of Verification Points in order to edit the output parameter values. This allows you to enter verification points manually, rather than by performing real solves, and still compare them with the response surface. You can also copy/paste data or import data from a CSV file by right-clicking and selecting Import Verification Points from the context menu. This is a way to compare either experimental data or data run in another simulation with the simulation response surface.

See Working with Tables (p. 205) for more information.

Min-Max Search

The Min-Max Search examines the entire output parameter space of a Response Surface to approximate the minimum and maximum values of each output parameter. In the Outline, if the Enable box is checked, a Min-Max Search will be performed each time the Response Surface is updated. Uncheck the box to disable the Min-Max Search feature. You may want to disable this feature in cases where the search could be very time-consuming (i.e., if there are a large number of input parameters or if there are discrete or Manufacturable Values input parameters).

Note

If you have discrete or Manufacturable Values input parameters, an alert will be displayed before a Min-Max Search is performed, reminding you that the search may be time-consuming. If you do not want to be shown this message before each Min-Max Search including discrete or Manufacturable Values parameters, you can disable it in the Design Exploration Messages section of the Workbench Options dialog (accessed via Tools > Options > Design Exploration).

Before updating your Response Surface, set the options for the Min-Max Search. Click on the Min-Max Search node and set the following options in the Properties view:

• Enter the Number of Initial Samples to generate for the optimization.

• Enter the Number of Start Points to use for the Min-Max search algorithm. The more start points entered, the longer the search time.

Once your Response Surface has been updated, select the Min-Max Search cell in the Outline to display the sample points that contain the calculated minimum and maximum values for each output parameter (they will be shown in the Response Surface Table). The maximum and minimum values in the output parameter Properties view will also be updated based on the results of the search. If the response surface is updated in any way, including changing the fitting for an output parameter or performing a refinement, a new Min-Max Search is performed.

The Min-Max Search uses the NLPQL algorithm to search the parameter space for the maximum and minimum values. Assuming that all input parameters are continuous, one search is done for the minimum value and one for the maximum, and if Number of Starting Points is more than one, two searches are done (minimum and maximum) for each starting point. To find the minimum or maximum of an output, the generated sample set is sorted and the first "n" points of the sort are used as the starting points. The NLPQL algorithm is run twice for each starting point, once for the minimum and once for the maximum.
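The multi-start scheme can be sketched as follows. A simple pattern search stands in for NLPQL (a gradient-based SQP method not reproduced here); the part that mirrors the description is sorting the generated samples and running one minimization and one maximization from each of the first n points.

```python
import random

def multistart_min_max(f, bounds, n_samples=50, n_starts=2, seed=1):
    """Approximate the min and max of f over the box 'bounds' using sorted
    samples as starting points and a crude pattern search per start."""
    rng = random.Random(seed)
    samples = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
               for _ in range(n_samples)]

    def local_search(x0, sign):
        # sign=+1 minimizes f, sign=-1 maximizes f (stand-in for NLPQL).
        x = list(x0)
        step = [(hi - lo) / 4 for lo, hi in bounds]
        for _ in range(60):
            improved = False
            for i, (lo, hi) in enumerate(bounds):
                for d in (-step[i], step[i]):
                    trial = x[:]
                    trial[i] = min(max(trial[i] + d, lo), hi)
                    if sign * f(trial) < sign * f(x):
                        x, improved = trial, True
            if not improved:
                step = [s / 2 for s in step]  # shrink when stuck
        return f(x)

    # The first "n" points of the sorted sample set seed both searches;
    # each starting point is used once for the minimum and once for the maximum.
    starts = sorted(samples, key=lambda p: f(list(p)))[:n_starts]
    mins = [local_search(p, +1) for p in starts]
    maxs = [local_search(p, -1) for p in starts]
    return min(mins), max(maxs)
```

In DesignXplorer the function being searched is the response surface itself, which is cheap to evaluate, so many starts and iterations are affordable.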

If there are discrete input parameters or continuous input parameters with Manufacturable Values, the number of searches is multiplied by the number of combinations of discrete and Manufacturable Values parameter values (there are two searches per combination, one for the minimum and one for the maximum). So if there are two discrete parameters, one with four levels and one with three, the number of searches done is 4 * 3 * 2 = 24. This example assumes only one starting point; if you add multiple starting points, the number of searches is multiplied accordingly.
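The search-count arithmetic can be written out directly; the function below is purely illustrative and its name is not part of DesignXplorer.

```python
def min_max_search_count(level_counts, n_start_points=1):
    """Number of NLPQL runs: one minimum and one maximum search per
    combination of discrete/Manufacturable Values levels, per start point."""
    combos = 1
    for levels in level_counts:  # e.g., [4, 3] for two discrete parameters
        combos *= levels
    return combos * 2 * n_start_points
```

With two discrete parameters of 4 and 3 levels and one start point, this gives 4 * 3 * 2 = 24 searches, matching the example above.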

When you select the Min-Max Search cell in the Outline, the sample points that contain the minimum and maximum calculated values for each output parameter are shown in the Response Surface table. These sample points can be saved to the project by selecting Insert as Design Point(s) from the right-click menu. The sample points can also be saved as response points (or as refinement points to improve the Response Surface) by selecting Explore Response Surface at Point from the right-click context menu.

The sample points obtained from a Min-Max Search are used by the Screening optimization method. If you run a Screening optimization, the samples are automatically taken into account in the sample set used to run or initialize the optimization. For details, see Performing a Screening Optimization (p. 130).

You can disable the Min-Max Search cell by deselecting the box in the Enabled column in the Outline. If you disable the Min-Max Search, no search is done when the Response Surface is updated. If you disable the search after performing an initial search, the results from the initial search will remain in the output parameter properties and will be shown in the Response Surface Table view when you click on the Min-Max Search cell.

Note

• If no solutions have occurred yet, no Minimum or Maximum values are shown.

• If the Design of Experiments object is solved but the Response Surface is not solved, the minimum and maximum values for each output parameter are extracted from the DOE solution's design points and displayed in the Properties for the output parameters.

• For discrete parameters and continuous parameters with Manufacturable Values, there is only one Minimum and Maximum value per output parameter even if a discrete parameter has many levels (there is not one Min-Max value set per combination of discrete and Manufacturable Values parameter values).

Response Surface Charts

The Response Surface provides four types of charts that allow you to explore the design space by graphically viewing the impact that parameters have on one another: the Spider chart, Response chart, Local Sensitivity chart, and Local Sensitivity Curves chart.

When a Response Surface is updated, one response point and one of each of the chart types are created automatically. You can insert as many response points and charts as you want, either from the Toolbox view or from the right-click context menu:


• When you click on a response point in the Outline view, the Toolbox displays all the chart types that are available for the cell. To insert another chart, drag a chart template out of the Toolbox and drop it on the response point.

• When you right-click on a response point cell in the Outline view, you can select the Insert option for the type of chart you want to add.

• To duplicate a chart that already exists in the Outline view, you can either:

– Right-click on the chart and select Duplicate from the context menu. This operation creates a duplicate chart under the same response point.

– Drag the chart and drop it on a response point. This operation allows you to create a duplicate chart under the response point of your choice.

Note

Chart duplication triggers a chart update; if the update succeeds, both the original chart and the duplicate will be up-to-date.

Once you’ve created a chart, you can change the name of a chart cell by double-clicking it and entering the new name. (This will not affect the title of the chart, which is set as part of the chart properties.) You can also save a chart as a graphic by right-clicking it and selecting Save Image As. See Saving a Chart for more information.

Each chart provides the ability to visually explore the parameter space by using the input parameter sliders (or drop-down menus for discrete parameters and continuous parameters with Manufacturable Values) available in the chart's Properties view. These sliders allow you to modify an input parameter value and view its effect on the displayed output parameter.

All of the Response charts under a response point in the Chart view use the same input parameter values because they are all based on the current parameter values for that response point. Thus, when you modify the input parameter values, the response point and all of its charts are refreshed to take the new values into account.

Related Topics:

• Using the Response Chart (p. 101)

• Using the Spider Chart (p. 112)

• Using Local Sensitivity Charts (p. 112)

Using the Response Chart

The Response chart is a graphical representation that allows you to see how changes to each input parameter affect a selected output parameter. In this chart, you select an output parameter to be displayed on one axis and (depending on the chart mode used) select one or more input parameters to display on the other axes.

Once your input(s) and output are selected, you can use the sliders or enter values to change the values of the input parameters that are not selected in order to explore how these parameters affect the shape and position of the curve.

Additionally, you can opt to display all the design points currently in use (from the DOE and the Response Surface refinement) by selecting the Show Design Points check box in the Properties view. With a small number of input parameters, this option can help you to evaluate how closely the Response Surface fits the design points in your project.

Response Chart Modes

The Response chart has three different viewing modes: 2D Contour Graph, 3D Contour Graph, and 2D Slices.

• The 2D mode is the 2D Contour Graph, a two-dimensional graphic that allows you to view how changes to a single input impact a single output. For details, see Using the 2D Contour Graph Response Chart (p. 104).

• The 3D mode is the 3D Contour Graph, a three-dimensional graphic that allows you to view how changes to two inputs impact a single output. For details, see Using the 3D Contour Graph Response Chart (p. 105).

• The 2D Slices mode combines the benefits of the 2D and 3D Contour Graph modes by compressing all the data contained in a three-dimensional surface into an easy-to-read two-dimensional format. It displays an input on the X axis, an input on the "Slice" axis, and a selected output on the Y axis. The number of slices on the Slice axis is either the number of slices you define, the number of Manufacturable Values, or the number of discrete parameter levels. For details, see Using the 2D Slices Response Chart (p. 107).

When you solve a Response Surface, a Response chart is automatically added for the default response point in the Response Surface Outline view. To add another Response chart (either an additional chart for the default response point or a chart for a different response point), right-click the desired response point in the Outline view and select Insert Response.


Understanding the Response Chart Display

The graphical rendering of the Response chart varies according to the type (i.e., the Classification property and the usage of Manufacturable Values) of the parameters selected. When two input parameters are selected for a Response chart, the display is further determined by the specific combination of parameter types. This section illustrates how parameters of different types and various combinations are rendered in the three Response chart modes.

Display by Parameter Type

Input parameters can be continuous, discrete, or continuous with Manufacturable Values; the Response chart displays each type differently, with a graphical rendering that reflects the nature of the parameter values. For example:

• Continuous parameters are represented by colored curves that reflect the continuous nature of the parameter values.

• Discrete parameters are represented by bars that reflect the discrete nature of the parameter values. There is one bar per discrete value.

• Continuous parameters with Manufacturable Values are represented by a combination of curves and markers, with transparent gray curves reflecting the continuous values and colored markers reflecting the discrete nature of the Manufacturable Values. There is one marker per Manufacturable Value.

The examples below show how each type of parameter is displayed on the 2D Response chart.

Display by Parameter Combination

When two input parameters are selected for the Response chart, different parameter type combinations are displayed differently, with a graphical rendering that reflects the nature of each parameter.

3D Response Chart Parameter Combinations

The 3D Response chart has two inputs and can have combinations of parameters with like types or unlike types. Response chart examples for possible combinations are shown below.


2D Slices Response Chart Parameter Combinations

The 2D Slices Response chart has two inputs. Combinations of these inputs are categorized first by the parameter type of the selected input (the X Axis) and then further distinguished by the parameter type of the calculated input (the Slice Axis). For each X-axis parameter type, there are two different renderings:

• The X axis in conjunction with continuous values (a continuous parameter). In this instance, you specify the number of curves or "slices."

• The X axis in conjunction with discrete values (either a discrete parameter or a Manufacturable Values parameter). In this instance, the number of slices is automatically set to the number of discrete levels or the number of Manufacturable Values.

Response chart examples for possible combinations are shown below.


Using the 2D Contour Graph Response Chart

A Response chart in the 2D Contour Graph mode is a two-dimensional graphic that allows you to view how changes to a single input impact a single output. Essentially, it is a flattened representation of a three-dimensional Response chart, displaying a selected input on the X axis and a selected output on the Y axis.


Viewing the 2D Response Chart

To view a Response chart in the 2D Contour Graph mode:

1. In the Response Surface Outline view, select the Response chart cell.

2. In the Properties view, under Chart:

• Set the Mode property to 2D.

• Set chart resolution by entering a value for the Chart Resolution Along X property.

3. In the Properties view, under Axes:

• For the X Axis property, select an input parameter.

• For the Y Axis property, select an output parameter.

The Response chart will automatically update according to your selections.

For more information on available properties, see Response Chart: Properties (p. 111).

Using the 3D Contour Graph Response Chart

The 3D Contour Graph mode is a three-dimensional graphic that allows you to view how changes to two inputs impact a single output. It displays a selected input on the X axis, a selected input on the Y axis, and a selected output on the Z axis.


Viewing the 3D Response Chart

1. In the Response Surface Outline view, select the Response chart cell.

2. In the Properties view, under Chart:

• Set the Mode property to 3D.

• Set chart resolution by entering values for the Chart Resolution Along X and Chart Resolution Along Y properties.

3. In the Properties view, under Axes:

• For the X Axis property, select an input parameter.

• For the Y Axis property, select an input parameter.

• For the Z Axis property, select an output parameter.

The Response chart will automatically update according to your selections; a smooth three-dimensional contour of Z versus X and Y displays.

For more information on available properties, see Response Chart: Properties (p. 111).

Manipulating the 3D Response Chart

In the Chart view, the 3D contour can be rotated by clicking and dragging the mouse. Moving the cursor over the response surface shows the values of Z, X, and Y. The values of other input parameters can also be adjusted in the Properties view using the sliders, showing different contours of Z. Additionally, the corresponding values of other output parameters can be seen in the Properties view; thus, instantaneous design and analysis is possible, leading to the generation of additional design points.

The triad control at the bottom left in the 3D Response chart view allows you to rotate the chart in freehand mode or quickly view the chart from a particular plane. To zoom in or out on any part of the chart, use the normal zoom controls (Shift + middle mouse button, or the scroll wheel). See Setting Chart Properties for details.

Using the 2D Slices Response Chart

The 2D Slices mode combines the benefits of the 2D and 3D Contour Graph modes by representing a three-dimensional surface in a two-dimensional format. It displays an input on the X axis, an input on the Slice axis, and an output on the Y axis.

The value of the input on the Slice axis is calculated from the number of curves defined for the X and Y axes.

• When both of the inputs are continuous parameters, you specify the number of slices to be displayed.

• When one or both of the inputs are either discrete or continuous with Manufacturable Values, the number of slices is determined by the number of levels defined for the input parameter(s).

Essentially, the first input on the X axis varies continuously, while the number of curves or "slices" defined for the Slice axis represents the second input. Both inputs are then displayed on the XY plane, with regard to the output parameter on the Y axis. You can think of the 2D Slices chart as a "projection" of the 3D Response Surface curves onto a flat surface.
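The derivation of the slice values can be sketched as follows: for a continuous Slice-axis input, the requested number of slices is spread across the parameter range, while a discrete or Manufacturable Values input contributes one slice per defined level. (Illustrative Python; the even spacing is an assumption about how the slices are positioned.)

```python
def slice_values(param, n_slices=10):
    """Return the Slice-axis values for a 2D Slices chart.

    'param' is a dict: either {'type': 'continuous', 'range': (lo, hi)}
    or {'type': 'discrete', 'levels': [...]}; Manufacturable Values behave
    like 'discrete' here, giving one slice per defined value."""
    if param['type'] == 'continuous':
        lo, hi = param['range']
        if n_slices == 1:
            return [lo]
        step = (hi - lo) / (n_slices - 1)
        return [lo + i * step for i in range(n_slices)]
    return list(param['levels'])
```

Each returned value fixes the second input at one level, producing one curve of the output against the X-axis input.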

Viewing the 2D Slices Response Chart

To view a Response chart in the 2D Slices mode:

1. In the Response Surface Outline view, select the Response chart cell.

2. In the Properties view, under Chart:

• Set the Mode property to 2D Slices.


• Enter a value for the Chart Resolution Along X property.

• If displayed, enter a value for the Number of Slices property.

3. In the Properties view, under Axes:

• For the X Axis property, select an input parameter.

• For the Slice Axis property, select an input parameter.

• For the Y Axis property, select an output parameter.

The Response chart will automatically update according to your selections.

For more information on available properties, see Response Chart: Properties (p. 111).

2D Slices Rendering: Example

To illustrate how the 2D Slices chart is rendered, we'll start with a 3D Response chart. In our example, both inputs are continuous parameters. Chart resolution properties default to 25, which means there are 25 points on the X axis and 25 points on the Y axis.

Now, we'll rotate the 3D image so the X axis is along the bottom edge of the chart, mirroring the perspective of the 2D Slices chart for a better comparison.


Finally, we'll switch the chart Mode to 2D Slices. When you compare the chart below to the earlier 3D version, you can see how the 2D Slices chart is actually a two-dimensional rendering of a three-dimensional image. From the example below, you can see the following things:

• Along the Y axis, there are 10 slices, corresponding to the value of the Number of Slices property.

• Along the X axis, each slice intersects with 25 points, corresponding to the value of the Chart Resolution Along X property.


Response Chart: Example

This example illustrates using the three different modes of the Response chart to view the impact of input parameters on an output parameter.

We will work with the following example:

• DOE Type: Sparse Grid Initialization

• Input Parameter 1: WB_X (continuous, with range of -2; 2)

• Input Parameter 2: WB_Y (continuous with Manufacturable Values, with Manufacturable Values levels of 1, 1.5, and 1.8, and a range of 1; 1.8)

• Output Parameter: WB_Rosenbrock

• Design Points: There are 6 design points on the X axis and 6 design points on the Y axis (since there is an input with Manufacturable Values, the number of slices is determined by the number of levels defined)

The images below show the initial Response chart in all three modes: 2D, 3D, and 2D Slices.

To display the design points from the DOE and the Response Surface refinement, select the Show Design Points check box in the Properties view. The design points are superimposed on your chart.

When working with Manufacturable Values, you can improve chart quality by extending the parameter range. Here, we'll extend it by adding Manufacturable Values of -1 and 3, which become the new lower and upper bounds.


To improve the quality of your chart further, you can increase the number of points used in building it by entering values in the Properties view. Below, we've increased the number of points on the X and Y axes from 6 to 25.

Response Chart: Properties

To specify the properties of a Response chart, select Response in the Response Surface Outline view and then edit the properties in the Properties view. Note that available properties vary according to the chart mode and the type of parameters selected.

Chart Properties

Determines the properties of the chart.

• Display Full Parameter Name: Determines whether the chart displays the full parameter name or the parameter ID.

• Mode: Determines whether the chart is in 2D, 3D, or 2D Slices mode.

• Chart Resolution Along X: Determines the number of points on the X axis. The number of points controls the amount of curvature that can be displayed. A minimum of 2 points is required and produces a straight line. A maximum of 100 points is allowed for maximum curvature. Defaults to 25.

• Chart Resolution Along Y: Determines the number of points on the Y axis (3D and 2D Slices modes only). The number of points controls the amount of curvature that can be displayed. A minimum of 2 points is required and produces a straight line. A maximum of 100 points is allowed for maximum curvature. Defaults to 25.

• Number of Slices: Determines the number of slices displayed in the 2D Slices chart.

• Show Design Points: Determines whether all of the design points currently in use (i.e., both in the Design of Experiments and from the Response Surface refinement) are displayed on the chart.

Axes Properties

Determines what data is displayed on each chart axis. For each axis, under Value, you can change what the chart displays on an axis by selecting an option from the drop-down.

• For the X Axis, available options are each of the input parameters enabled in the project.

• For the Y Axis, available options are as follows:

– For 2D mode, each of the output parameters in the project.

– For 3D mode, each of the input parameters enabled in the project.


– For 2D Slices mode, each of the output parameters in the project.

• For the Z Axis (3D mode only), available options are each of the output parameters in the project.

• For the Slice Axis (2D Slices mode only), available options are each of the input parameters enabled in the project. Only available when both input parameters are continuous.

Input Parameters

Each of the input parameters is listed in this section. Under Value, you can change the value of each input parameter by moving the slider (for continuous parameters) or by entering a new value with the keyboard (for discrete or Manufacturable Values parameters). The number to the right of the slider represents the current value.

Output Parameters

Each of the output parameters is listed in this section. Under Value, you can view the interpolated value for each parameter.

Generic Chart Properties

You can modify various generic chart properties for this chart. See Setting Chart Properties for details.

Using the Spider Chart

Spider charts allow you to visualize the impact that changing the input parameter(s) has on all of the output parameters simultaneously. When you solve a Response Surface, a Spider chart will appear in the Outline view for the default response point.

You can use the slider bars in the chart's Properties view to adjust the value of the input parameter(s) to visualize different designs. You can also enter specific values in the value boxes. The parameter legend box at the top left in the Chart view allows you to select the parameter that is in the primary (top) position. Only the axis of the primary parameter will be labeled with values.

Using Local Sensitivity Charts

Local Sensitivity charts allow you to see the impact of continuous input parameters (both with and without Manufacturable Values) on output parameters. At the Response Surface level, sensitivity charts are "Single Parameter Sensitivities." This means that design exploration calculates the change of the output(s) based on the change of each input independently, at the current value of each input parameter.


The larger the change of the output parameter(s), the more significant is the role of the input parameters that were varied. As such, single parameter sensitivities are local sensitivities.
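A single-parameter sensitivity of this kind can be approximated with a finite difference: perturb one input at a time around the current response point while the other inputs stay fixed, and compare the resulting output changes. The sketch below is illustrative and does not reproduce DesignXplorer's exact sensitivity formula; it assumes the response point lies in the interior of each range.

```python
def local_sensitivities(f, point, ranges, rel_step=0.01):
    """For each input i, measure the output change when x_i moves by
    rel_step of its range, all other inputs held at the current
    response-point values. Returns output change per unit of
    normalized (0-1) input change."""
    sens = {}
    for i, (lo, hi) in enumerate(ranges):
        h = rel_step * (hi - lo)
        up = list(point); up[i] = min(point[i] + h, hi)
        down = list(point); down[i] = max(point[i] - h, lo)
        # Central difference, normalized by the fraction of range moved.
        sens[i] = (f(up) - f(down)) / (2 * rel_step)
    return sens
```

Because the derivatives are taken at one specific point, moving the response point (with the sliders) generally changes the sensitivities, which is exactly why these are called local sensitivities.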

Types of Sensitivity Charts

DesignXplorer has two types of sensitivity charts: the standard Local Sensitivity chart and the Local Sensitivity Curves chart.

• The Local Sensitivity chart is a powerful project-level tool, allowing you to see at a glance the impact of all the input parameters on output parameters. For details, see Using the Local Sensitivity Chart (p. 115).

• The Local Sensitivity Curves chart helps you to further focus your analysis by allowing you to view independent parameter variations within the standard Local Sensitivity chart. It provides a means of viewing the impact of each input on specific outputs, given the current values of other parameters. For details, see Using the Local Sensitivity Curves Chart (p. 118).

When you solve a Response Surface, a Local Sensitivity chart and a Local Sensitivity Curves chart are automatically added for the default response point in the Response Surface Outline view. To add another chart (either an additional chart for the default response point or a chart for a different response point), right-click the desired response point in the Outline view and select either Insert Local Sensitivity or Insert Local Sensitivity Curves.

Manufacturable Values in Local Sensitivity Charts

Local sensitivity charts calculate sensitivities for continuous parameters and continuous parameters with Manufacturable Values, but require that you have at least one parameter that is not a discrete parameter. If all of your input parameters are discrete, the Local Sensitivity and Local Sensitivity Curves charts are not available.

For details on how continuous parameters with Manufacturable Values are represented on local sensitivity charts, see Understanding the Local Sensitivities Display (p. 113).

Discrete Parameters in Local Sensitivity Charts

Discrete parameters cannot be enabled as chart variables, but their impact on the output can still be displayed on the chart. When there is a discrete parameter in the project, the chart calculates the continuous values and Manufacturable Values sensitivities given their specific combination with the discrete parameter values.

For discrete parameters, the Properties view includes a drop-down that is populated with the discrete values defined for the parameter. By selecting different discrete values for each parameter, you can explore the different sensitivities given different combinations of discrete values. The chart is updated according to the changed parameter values. You can either check the sensitivities in a single chart or create multiple charts to compare the different designs.

Understanding the Local Sensitivities Display

Local sensitivity charts display both continuous parameters and continuous parameters with Manufacturable Values. This section illustrates how each type of sensitivity chart renders continuous values and Manufacturable Values.

Release 15.0 - © SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information of ANSYS, Inc. and its subsidiaries and affiliates.

Response Surface Charts


Local Sensitivity Curves Chart Display

On the Local Sensitivity Curves chart, continuous parameters are represented by colored curves, with a different color for each parameter.

For continuous parameters with Manufacturable Values, the continuous values are represented by a transparent gray curve, while the Manufacturable Values are represented by colored markers.

In the image below:

• Input parameters P2-D and P4-P (the yellow and blue curves) are continuous parameters.

• Input parameters P1-B and P3-L (the gray curves) are continuous with Manufacturable Values.

• The colored markers indicate the Manufacturable Values defined.

• The black markers indicate the location of the response point on each of the input curves.

Local Sensitivity Chart Display

On the Local Sensitivity chart, continuous parameters are represented by colored bars, with a different color for each parameter.

For continuous parameters with Manufacturable Values, the continuous values are represented by a gray bar, while the Manufacturable Values are represented by a colored bar in front of it. Each bar is defined with the min/max extracted from the Manufacturable Values and the average calculated from the support curve. The min/max of the output can vary according to whether or not Manufacturable Values are used; in this case, both the colored bar and the gray bar for the input are visible on the chart.

Also, if the parameter range extends beyond the actual Manufacturable Values defined, the bar is topped with a gray line to indicate the sensitivity obtained while ignoring the Manufacturable Values.

In the image below:

• Input parameters P2-D and P4-P (the yellow and blue bars) are continuous parameters.

• Input parameters P1-B and P3-L (the red and green bars) are continuous with Manufacturable Values.


Using Response Surfaces


• The bars for inputs P1-B and P3-L show differences between the min-max of the output when Manufacturable Values are used (the colored bar in front) versus when they are not (the gray bar in back, now visible).

Using the Local Sensitivity Chart

The Local Sensitivity chart can be a powerful exploration tool. For each output, it allows you to see the weight of the different inputs; it calculates the change of the output based on the change of each input independently, at the current value of each input parameter in the project. The Local Sensitivity chart can be displayed as a bar chart or a pie chart.


By default, the Local Sensitivity chart shows the impact of all input parameters on all output parameters, but it is also possible to specify the inputs and outputs to be considered. If you consider only one output, the resulting chart provides an independent sensitivity analysis for each single input parameter. To specify inputs and outputs, select or deselect the Enabled check box.

For more information on available properties, see Local Sensitivity Chart: Properties (p. 117).

Local Sensitivity Chart: Example

Below is an example of the Local Sensitivity chart displayed as a bar chart, plotted to show the impact of inputs P1-WB_Thickness and P2-WB_Radius on the output P4-WB_Deformation. An explanation of how the bar chart is built follows.

To determine the sensitivity of P1-WB_Thickness vs. P4-WB_Deformation, we:

1. Plot the response curve P4 = f(P1).

2. Compute (Max(P4) - Min(P4)) / Avg(P4) ≈ 2.3162.

3. If P4 increases while P1 increases, the sign is positive; otherwise it is negative.

In our example we get –2.3162, as shown below. This corresponds to the red bar for P1-WB_Thickness in the Local Sensitivity chart.
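The three steps above can be sketched in code. The linear response function and its sampling below are hypothetical stand-ins for the P4 = f(P1) response curve, not DesignXplorer's actual API; only the formula mirrors the text.

```python
# Sketch of the local sensitivity calculation described above.
# The response function is a made-up placeholder for a response curve.

def local_sensitivity(response, lo, hi, n=100):
    """(Max - Min) / Avg of the output over one input's range,
    signed by whether the output rises or falls with the input."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [response(x) for x in xs]
    magnitude = (max(ys) - min(ys)) / (sum(ys) / len(ys))
    # Positive if the output increases as the input increases.
    sign = 1.0 if ys[-1] >= ys[0] else -1.0
    return sign * magnitude

# Example: a monotonically decreasing response gives a negative sensitivity.
s = local_sensitivity(lambda p1: 10.0 - 2.0 * p1, lo=1.0, hi=3.0)
print(s)  # negative, since the output falls as the input rises
```

Note that the sign convention matches step 3: a response that decreases as the input increases yields a negative bar on the chart.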


Now we plot P4 = f(P2), and (Max(P4) - Min(P4)) / Avg(P4) ≈ 1.3316, as shown below. This corresponds to the blue bar for P2-WB_Radius in the Local Sensitivity chart.

On each of these curves, only one input is varying and the other input parameters are constant, but you can change the value of any input and get an updated curve. This also applies to the standard Local Sensitivity chart; all the sensitivity values are recalculated when you change the value of an input parameter. If parameters are correlated, you will see the sensitivity varying; in other words, the relative weights of inputs may vary depending on the design.

Local Sensitivity Chart: Properties

To specify the properties of a local sensitivities chart, first select the Local Sensitivity chart in the Response Surface Outline view, and then edit the properties in the Properties view.

Chart Properties

• Display Full Parameter Name: Determines if the chart displays the full parameter name or the parameter ID.

• Mode: Determines whether the chart is in Pie or Bar format.


Input Parameters

Each of the input parameters is listed in this section. For each input parameter:

• Under Value, you can change the value by moving the slider or entering a new value with the keyboard. The number to the right of the slider represents the current value.

• Under Enabled, you can enable or disable the parameter by selecting or deselecting the check box. Disabled parameters do not display on the chart.

Output Parameters

Each of the output parameters is listed in this section. For each output parameter:

• Under Value, you can view the interpolated value for the parameter.

• Under Enabled, you can enable or disable the parameter by selecting or deselecting the check box. Disabled parameters do not display on the chart.

Generic Chart Properties

You can modify various generic chart properties for this chart. See Setting Chart Properties for details.

Using the Local Sensitivity Curves Chart

The Local Sensitivity Curves chart helps you to focus your analysis by allowing you to view independent parameter variations within the standard Local Sensitivity chart. This multi-curve chart provides a means of viewing the impact of each input parameter on specific outputs, given the current values of the other parameters. The Local Sensitivity Curves chart shows individual local sensitivities, with a separate curve to represent the impact of each input on one or two outputs.

There are two types of Local Sensitivities Curves chart: Single Output and Dual Output.

• The Single Output version calculates the impact of each input parameter on a single output parameter of your choice.

• The Dual Output version calculates the impact of each input parameter on two output parameters of your choice.

Local Sensitivity Curves Chart: Single Output

By default, the chart opens to the Single Output version, as shown below. In this version:

• The X-axis is all selected inputs (normalized).

• The Y-axis is a single output parameter.

• Each curve represents the impact of an enabled input parameter on the selected output.

• For each curve, the current response point is indicated by a black point marker (all of the response points have equal Y-axis values).

• Continuous parameters with Manufacturable Values are represented by gray curves; the colored markers are the Manufacturable Values defined.


• Where impacts are the same for one or more inputs, the curves are superimposed, with the curve of the input displayed first in the list hiding the curves for the other inputs.

To change the output being considered, go to the Properties view and select a different output parameter for the Y axis property.
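Because all enabled inputs share one X-axis on the Single Output chart, each input must be rescaled to a common range. A minimal sketch of such a normalization, with illustrative ranges that are not taken from any actual chart:

```python
# Normalize an input value to [0, 1] over its range so that inputs
# with different units can share one X-axis, as on the curves chart.
def normalize(value, lo, hi):
    return (value - lo) / (hi - lo)

# Illustrative ranges for two hypothetical inputs.
print(normalize(15.0, 10.0, 20.0))  # 0.5
print(normalize(2.5, 0.0, 10.0))    # 0.25
```

This is only the generic rescaling idea; DesignXplorer performs the normalization internally when it draws the chart.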

Local Sensitivity Curves Chart: Dual Output

To view the Dual Output version, go to the Properties view and select an output parameter for both the X axis and the Y axis properties. In this version:

• The X-axis is a selected output parameter.

• The Y-axis displays a selected output parameter.

• Each curve represents the impact of an enabled input parameter on the two selected outputs.

• The circle at the end of a curve represents the beginning of the curve (i.e., the lower bound of the input parameter).

• For each curve, the current response point is indicated by a black point marker.

• Continuous parameters with Manufacturable Values are represented by gray curves; the colored markers are the Manufacturable Values defined.

• Where impacts are the same for one or more inputs, the curves are superimposed, with the curve of the input displayed first in the list hiding the curves for the other inputs.

To change the output(s) being considered, go to the Properties view and select a different output parameter for one or both of the Axes properties.


For more information on available properties, see Local Sensitivity Curves Chart: Properties (p. 123).

Local Sensitivity Curves Chart: Example

This section explains how to interpret the Local Sensitivity Curves chart and illustrates how it is related to the Local Sensitivity chart for the same response point.

Below is an example of a Local Sensitivity bar chart for a given response point. This chart shows the impact of four input parameters (P1-B, P2-D, P4-P, and P5-E) on two output parameters (P6-V and P8-DIS).

On the left side of the chart, you can see how input parameters impact the output parameter P6-V:

• P2-D (the yellow bar) has the most impact and the impact is positive.

• P1-B (the red bar) has a moderate impact, and the impact is positive.

• Input parameters P4-P and P5-E (the teal and blue bars) have no impact at all.

On the right side of the chart, you can see the difference in how the same input parameters impact the output parameter P8-DIS:

• P2-D (the yellow bar) has the most impact, but the impact is negative.

• P1-B (the red bar) has a moderate impact, but the impact is negative.

• P4-P (the teal bar) now has a moderate impact, and the impact is positive.

• P5-E (the blue bar) now has a moderate impact, and the impact is negative.


Single Output

When you view the Local Sensitivity Curves chart for the same response point, the chart defaults to the Single Output version (i.e., it shows the impact of all enabled inputs on a single selected output). The output parameter is on the Y-axis, with impact measured vertically.

In the two examples below, you can see that the Single Output curve charts for output parameters P6-V and P8-DIS show the same sensitivities as the Local Sensitivity bar chart.

Note

For output P6-V, inputs P4-P and P5-E have the same level of impact, so the blue line is hidden behind the teal line in the chart below.

Note

For output P8-DIS, inputs P1-B and P5-E have the same level of impact, so the blue line is hidden behind the red line in the chart below.


Dual Output

For more detailed information, you can view the Dual Output version of the Local Sensitivity Curves chart (i.e., it shows the impact of all enabled inputs on two selected outputs). In this particular example, there are only two output parameters. If there were six outputs in the project, however, you could narrow the focus of your analysis by selecting the two that are of most interest to you.

The curves chart below shows the impact of the same input parameters on both of the output parameters we've been using. Output P6-V is on the X-axis, with its impact measured horizontally. Output P8-DIS is on the Y-axis, with its impact measured vertically. From this dual representation, you can see the following things:

• P2-D has the most significant impact on both outputs. The impact is positive for output P6-V, and negative for output P8-DIS.

• P1-B has a moderate impact on both outputs. The impact is positive for output P6-V, and negative for output P8-DIS.

• P4-P has no impact on output P6-V and a moderate positive impact on output P8-DIS.

• P5-E has no impact on output P6-V and a moderate negative impact on output P8-DIS. Due to duplicate impacts, its curve is hidden for both outputs.


Local Sensitivity Curves Chart: Properties

To specify the properties of a local sensitivities chart, first select Local Sensitivity Curves in the Response Surface Outline view, and then edit the properties in the Properties view.

Chart Properties

Determines the properties of the chart.

• Display Full Parameter Name: Determines if the chart displays the full parameter names or the parameter numbers.

• Axis Range: Determines the ranges of the output parameter axes (i.e., the lower and upper bounds of the axes).

– If the Use Min Max of the Output Parameter option is selected, the axes bounds correspond to the range given by the Min-Max Search results. If the Min-Max Search object was disabled, the output parameter bounds are determined from existing design points.

– If the Use Curves Data option is selected, the axes bounds match the ranges of the curves.

• Chart Resolution: Determines the number of points per curve. The number of points controls the amount of curvature that can be displayed. A minimum of 2 points is required and produces a straight line. A maximum of 100 points is allowed for maximum curvature. Defaults to 25.

Axes Properties

Determines what data is displayed on each chart axis. For each axis, under Value, you can change what the chart displays on an axis by selecting an option from the drop-down.

• For the X-Axis, available options are Input Parameters and each of the output parameters defined in the project. (If Input Parameters is selected, you are viewing a Single Output chart; otherwise, the chart is Dual Output.)

• For the Y-Axis, available options are each of the output parameters defined in the project.

Input Parameters

Each of the input parameters is listed in this section. For each input parameter:

• Under Value, you can change the value by moving the slider or entering a new value with the keyboard. The number to the right of the slider represents the current value.

• Under Enabled, you can enable or disable the parameter by selecting or deselecting the check box. Disabled parameters do not display on the chart.

Note

Discrete input parameters display with the level values associated with a response point, but cannot be enabled as chart variables.


Output Parameters

Each of the output parameters is listed in this section. Under Value, you can view the interpolated value for each parameter.

Generic Chart Properties

You can modify various generic chart properties for this chart. See Setting Chart Properties for details.


Using Goal Driven Optimization

This section contains information about running a Goal Driven Optimization analysis. DesignXplorer offers two different types of Goal Driven Optimization systems: Response Surface Optimization and Direct Optimization.

A Response Surface Optimization system draws its information from its own Response Surface component, and so is dependent on the quality of the response surface. The available optimization methods (Screening, MOGA, NLPQL, and MISQP) utilize response surface evaluations, rather than real solves.

A Direct Optimization system is a single-component system which utilizes real solves. When used in a Direct Optimization context, the available optimization methods (Screening, MOGA, NLPQL, MISQP, Adaptive Single-Objective, and Adaptive Multiple-Objective) utilize real solves, rather than response surface evaluations.

A Direct Optimization system does not have an associated Response Surface component, but can draw its information from any other system or component that contains design point data. It is possible to reuse existing design point data, reducing the time needed for the optimization, without altering the source of the design points. For example:

• You can transfer design point data from an existing Response Surface Optimization and improve upon the optimization without actually changing the original response surface.

• You can use information from a Response Surface that has been refined with the Kriging method and validated. You can transfer the design point data to a Direct Optimization system and then adjust the quality of the original response surface without affecting the attached direct optimization.

• You can transfer information from any DesignXplorer system or component containing design point data that has already been updated, saving time and resources by reusing existing, up-to-date data rather than reprocessing it.

Note

The transfer of design point data between two Direct Optimization systems is not supported.

A Direct Optimization system also allows you to monitor its progress by watching the Table view. During a Direct Optimization, the Table view displays all the design points being calculated by the optimization. The view is refreshed dynamically, allowing you to see how the optimization proceeds, how it converges, etc. Once the optimization is complete, the raw design point data is stored for future reference. You can access the data by clicking the Raw Optimization Data node in the Outline view to display in the Table view a listing of the design points that were calculated during the optimization.

Note

This list contains raw design point data only; no analysis is applied, and it does not show feasibility, ratings, Pareto fronts, etc. for the included points.

For more information, see Transferring Design Point Data for Direct Optimization (p. 127).

Related Topics:

Creating a Goal Driven Optimization System
Transferring Design Point Data for Direct Optimization
Goal Driven Optimization Methods
Defining the Optimization Domain
Defining Optimization Objectives and Constraints
Working with Candidate Points
Goal Driven Optimization Charts and Results

Creating a Goal Driven Optimization System

To create a Goal Driven Optimization system on your Project Schematic:

1. With the Project Schematic displayed, drag either a Response Surface Optimization or a Direct Optimization template from the Design Exploration area of the toolbox and drop it on the Project Schematic. You may drag it:

• directly under the Parameter Set bar or an existing system directly under the Parameter Set bar, in which case it will not share any data with any other systems in the Project Schematic.

• onto the Design of Experiments component of a system containing a Response Surface, in which case it will share all of the data generated by the DOE component.

• onto the Response Surface component of a system containing a Response Surface, in which case it will share all of the data generated for the DOE and Response Surface components.

For more detailed information on data transfer, see Transferring Design Point Data for Direct Optimization (p. 127).

2. For a Response Surface Optimization system, if you are not sharing the DOE and Response Surface cells, edit the DOE, set it up as described in Design of Experiments Component Reference (p. 29), and solve both the DOE and Response Surface cells.

3. For a Direct Optimization system, if you have not already shared data via the options in Step 1, you have the option of creating data transfer links to provide the system with design point data.

Note

If no design point data is shared, design point data will be generated automatically by the update.


4. On the Project Schematic, double-click on the Optimization cell of the new system to open the Optimization tab.

5. Specify the optimization method.

• In the Optimization tab Outline view, select the Optimization node.

• In the Properties view, select an optimization method and specify its optimization properties. For details on the available optimization algorithms, see Goal Driven Optimization Methods (p. 128).

6. Specify optimization objectives or constraints.

• In the Optimization tab Outline view, select either the Objectives and Constraints node or an item underneath it.

• In the Table or Properties view, define the Optimization objectives and constraints. For details, see Defining Optimization Objectives and Constraints (p. 153).

7. Specify the optimization domain.

• In the Optimization tab Outline view, select the Domain node or an input parameter or parameter relationship underneath it.

• In the Table or Properties view, define the selected domain object. For details, see Defining the Optimization Domain (p. 148).

8. Click the Update toolbar button.

Transferring Design Point Data for Direct Optimization

In DesignXplorer, you can transfer design point data to a Direct Optimization system by creating one or more data transfer links from a component containing design point data to the system's Optimization component.

Note

The transfer of design point data between two Direct Optimization systems is not supported.

Data Transferred

The data transferred consists of all design points that have been obtained by a real solution; design points obtained from the evaluation of a response surface are not transferred. Design point data is transferred according to the nature of its source component.

• Design of Experiments component: All points, including those with custom output values.

• Response Surface component: All refinement and verification points used to create or evaluate the Response Surface, since these are obtained by a real solution.

Note

The points from the DOE are not transferred; to transfer these points, create a data transfer link from the DOE component.


• Parameters Correlation component (standalone): All points.

• Parameters Correlation component (linked to a Response Surface component): No points, because all of them are obtained from a response surface evaluation, rather than a real solve.

Data Usage

Once transferred, the design point data is stored in an initial sample set, as follows:

• If the Direct Optimization method is NLPQL or MISQP, the samples are not used.

• If the Direct Optimization method is Adaptive Single-Objective, the initial sample set is filled with transferred points, and additional points are generated following the LHS workflow to reach the requested number of samples.

• If the Direct Optimization method is MOGA or Adaptive Multiple-Objective, the initial sample set is filled with transferred points and additional points are generated to reach the requested number of samples.

The transferred points are added to the samples generated to initiate the optimization. For example, if you have requested 100 samples and 15 points are transferred, a total of 115 samples will be available to initiate the optimization.

When there are duplicates between the transferred points and the initial sample set, the duplicates are removed. For example, if you have requested 100 samples, 15 points are transferred, and 6 duplicates are found, a total of 109 samples will be available to initiate the optimization.
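The sample-set arithmetic above can be sketched as a merge that counts duplicates only once. Representing design points as tuples of parameter values is an assumption for illustration, not DesignXplorer's internal data model.

```python
# Sketch of combining generated samples with transferred design points,
# skipping duplicates, mirroring the worked example in the text.
def initial_sample_set(generated, transferred):
    merged = list(generated)
    for point in transferred:
        if point not in merged:  # duplicates are removed
            merged.append(point)
    return merged

generated = [(i, 2.0 * i) for i in range(100)]        # 100 requested samples
transferred = ([(i, 2.0 * i) for i in range(6)] +     # 6 duplicates of generated points
               [(200 + i, 0.0) for i in range(9)])    # 9 new points (15 transferred total)
n = len(initial_sample_set(generated, transferred))
print(n)  # 109, matching the example: 100 + 15 - 6
```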

For more information on data transfer links, see Project Schematic Links in the Workbench User's Guide.

Goal Driven Optimization Methods

DesignXplorer offers the following Goal Driven Optimization methods:

Method                        Description                                         Response Surface   Direct
                                                                                  Optimization       Optimization
Screening                     Shifted-Hammersley Sampling                         X                  X
NLPQL                         Nonlinear Programming by Quadratic Lagrangian       X                  X
MISQP                         Mixed-Integer Sequential Quadratic Programming      X                  X
MOGA                          Multi-Objective Genetic Algorithm                   X                  X
Adaptive Single-Objective     Hybrid optimization method using Optimal                               X
                              Space-Filling Design, a Kriging response
                              surface, MISQP, and domain reduction in a
                              Direct Optimization system
Adaptive Multiple-Objective   Hybrid optimization method using a Kriging                             X
                              response surface and MOGA in a Direct
                              Optimization system
External Optimizer            External optimizer as defined in a loaded           Availability is determined by the
                              optimization extension                              optimizer, as defined in the
                                                                                  optimization extension.

The table below shows the general capabilities of each method:

Method                        Single     Multiple    Local    Global   Discrete   Manufacturable   Parameter
                              Objective  Objectives  Search   Search              Values           Relationships
Screening                                X                    X        X          X                X
NLPQL                         X                      X                                             X
MISQP                         X                      X                 X          X                X
MOGA                                     X                    X        X          X                X
Adaptive Single-Objective     X                               X                                    X
Adaptive Multiple-Objective              X                    X                   X                X
External Optimizer            Capabilities are determined by the optimizer, as defined in the
                              optimization extension.

The optimization method for a design study is selected via the drop-down menu for the Method Name property (in the Properties view of the Optimization tab). DesignXplorer filters the Method Name list for applicability to the current project; that is, it displays only those optimization methods that can be used to solve the optimization problem as it is currently defined. For example, if your project has multiple objectives defined and an external optimizer does not support multiple objectives, the optimizer will not be included in the option list. When no objectives or constraints are defined for a project, all optimization methods are displayed in the Method Name drop-down. If you already know that you want to use a particular external optimizer, it is recommended that you select it as the method before setting up the rest of the project. Otherwise, the optimization method could be inadvertently filtered from the list.

The following sections give instructions for selecting and specifying the optimization properties for each algorithm.


Performing a Screening Optimization
Performing a MOGA Optimization
Performing an NLPQL Optimization
Performing an MISQP Optimization
Performing an Adaptive Single-Objective Optimization
Performing an Adaptive Multiple-Objective Optimization
Performing an Optimization with an External Optimizer

Performing a Screening Optimization

The Screening option can be used for both Response Surface Optimization and Direct Optimization. It allows you to generate a new sample set and sort its samples based on objectives and constraints. It is a non-iterative approach that is available for all types of input parameters. Usually the Screening approach is used for preliminary design, which may lead you to apply the MOGA or NLPQL options for more refined optimization results. See the Principles (GDO) (p. 225) theory section for more information.
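The non-iterative idea (generate samples, evaluate them, then sort with feasible points ranked first) can be sketched as follows. The objective, constraint, and plain uniform sampling are illustrative stand-ins; DesignXplorer's Screening actually uses Shifted-Hammersley sampling, and this is not its internal implementation.

```python
import random

# Sketch of a Screening-style pass: draw samples, evaluate them, and
# sort by objective value with feasible points ranked ahead of
# infeasible ones. Everything here is a made-up illustration.
def screen(objective, feasible, bounds, n_samples=100, n_candidates=3):
    rng = random.Random(0)  # fixed seed for repeatability
    samples = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
               for _ in range(n_samples)]
    # Feasible points first (False sorts before True), then lowest objective.
    ranked = sorted(samples, key=lambda x: (not feasible(x), objective(x)))
    return ranked[:n_candidates]

# Minimize x0 + x1 subject to x0 >= 0.5, over the unit square.
cands = screen(objective=lambda x: x[0] + x[1],
               feasible=lambda x: x[0] >= 0.5,
               bounds=[(0.0, 1.0), (0.0, 1.0)])
print(cands)  # the best feasible candidates found in the sample set
```

The sort key illustrates why a Tradeoff chart can distinguish feasible from infeasible samples: feasibility is evaluated per sample and dominates the ranking.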

To perform a Screening optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Properties view:

• Method Name: Set to Screening.

• Number of Samples: Enter the number of samples to generate for the optimization. Note that samples are generated very rapidly from the response surface and do not require an actual solve of any design points.

Must be greater than or equal to the number of enabled input and output parameters; the number of enabled parameters is also the minimum number of samples required to generate the Sensitivities chart. You can enter a minimum of 2 and a maximum of 10,000. Defaults to 1000 for a Response Surface Optimization and 100 for a Direct Optimization.

• Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table View (p. 158).

• Verify Candidate Points: Select to verify candidate points automatically at the end of an update for a Response Surface Optimization; this property is not applicable to Direct Optimization.

3. Specify optimization objectives and constraints.

• In the Optimization tab Outline view, select the Objectives and Constraints node or an item underneath it.

• In the Table or Properties view, define the Optimization objectives and constraints. For details, see Defining Optimization Objectives and Constraints (p. 153).

4. Specify the optimization domain.

• In the Optimization tab Outline view, select the Domain node or an input parameter or parameter relationship underneath it.

• In the Table or Properties view, define the selected domain object. For details, see Defining the Optimization Domain (p. 148).


5. Click the Update toolbar button or right-click the Optimization node in the Outline view and select the Update option.

The result is a group of points or “sample set.” The points that are most in alignment with the objectives and constraints are displayed in the table as the Candidate Points for the optimization. The result also displays the read-only field Size of Generated Sample Set in the Properties view.

For a Screening Response Surface optimization, the sample points obtained from a response surfaceMin-Max search are automatically added to the sample set used to initialize or run the optimization.For example, if the Min-Max search results in 4 points and the user runs a Screening optimizationwith Number of Samples set to 100, the final optimization sample set will contain up to 104 points.If a point found by the Min-Max search is also contained in the initial Screening sample set, it is onlycounted once in the final sample set.
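The counting rule above (merged points are deduplicated) can be sketched in a few lines of Python. This is an illustration of the arithmetic only, not ANSYS code; the point values are hypothetical:

```python
# Illustrative sketch only (not ANSYS code): how the final sample-set size is
# obtained when Min-Max search points are merged into a Screening sample set.
# Points are represented here as tuples of input-parameter values.
def merged_sample_count(screening_samples, minmax_points):
    """Size of the union of the two sets; duplicated points count once."""
    return len(set(screening_samples) | set(minmax_points))

screening = {(x / 10.0,) for x in range(100)}        # 100 Screening samples
new_points = {(-1.0,), (11.0,), (12.0,), (13.0,)}    # 4 Min-Max points, none duplicated
dup_points = {(3.5,), (11.0,), (12.0,), (13.0,)}     # (3.5,) is already a Screening sample
print(merged_sample_count(screening, new_points))    # 104
print(merged_sample_count(screening, dup_points))    # 103
```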

Note

When constraints exist, the Tradeoff chart will indicate which samples are feasible (meet the constraints) or infeasible (do not meet the constraints). There is a display option for the infeasible points.

6. Select the Optimization node in the Outline view and review optimization details in the Optimization Status section of the Properties view. The following output properties are displayed:

• Number of Evaluations: Number of design point evaluations performed. This value counts all points used in the optimization, including design points pulled from the cache. It can be used to measure the efficiency of the optimization method in finding the optimum design point.

• Number of Failures: Number of failed design points for the optimization. When design points fail, a Screening optimization does not attempt to solve additional design points in their place and does not include them on the Direct Optimization Samples chart.

• Size of Generated Sample Set: Indicates the number of samples generated in the sample set. For a Direct Optimization, this is the number of samples successfully updated. For a Response Surface Optimization, this is the number of samples successfully updated plus the number of different (non-duplicate) samples generated by the Min-Max Search (if enabled).

• Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum Number of Candidates input property.)

7. In the Outline view, you can select the Domain node or any object underneath it to review domain details in the Properties and Table views.

8. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display, in the Table view, a listing of the design points that were calculated during the optimization. If a raw optimization data point exists in the Design Point Parameter Set, the corresponding design point name is indicated in parentheses under the Name column.

Note

This list is compiled from raw data and does not show feasibility, ratings, etc. for the included design points.


Goal Driven Optimization Methods


Performing a MOGA Optimization

The MOGA (Multi-Objective Genetic Algorithm) option can be used for both Response Surface Optimization and Direct Optimization. It allows you to generate a new sample set or use an existing set, providing a more refined approach than the Screening method. It is available for all types of input parameters and can handle multiple goals. See the Multi-Objective Genetic Algorithm (MOGA) (p. 246) theory section for more information.

Note

To display advanced options, go to Tools > Options > Design Exploration > Show Advanced Options.

To perform a MOGA optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Properties view:

• Method Name: Set to MOGA.

• Type of Initial Sampling: Advanced option that allows you to generate different kinds of sampling. If you do not have any parameter relationships defined, specify Screening or OSF. Defaults to Screening. If you have parameter relationships defined, the initial sampling must be performed by the constrained sampling algorithms (because parameter relationships constrain the sampling) and this property is automatically set to Constrained Sampling.

• Random Generator Seed: Advanced option that displays only when Type of Initial Sampling is set to OSF. Specify the value used to initialize the random number generator invoked internally by the OSF algorithm. The value must be a positive integer. This property allows you to generate different samplings (by changing the value) or to regenerate the same sampling (by keeping the same value). Defaults to 1.

• Maximum Number of Cycles: Advanced option that displays only when Type of Initial Sampling is set to OSF. Determines the number of optimization loops the algorithm needs, which in turn determines the discrepancy of the OSF. The optimization is essentially combinatorial, so a large number of cycles will slow down the process; however, it will also make the discrepancy of the OSF smaller. The value must be greater than 0. For practical purposes, 10 cycles is usually good for up to 20 variables. Defaults to 10.

• Number of Initial Samples: Specify the initial number of samples to be used. Must be greater than or equal to the number of enabled input and output parameters. The minimum recommended number of initial samples is 10 times the number of input parameters; the larger the initial sample set, the better your chances of finding the input parameter space that contains the best solutions.

The number of enabled parameters is also the minimum number of samples required to generate the Sensitivities chart. You can enter a minimum of 2 and a maximum of 10,000. Defaults to 100.

If you are switching from the Screening method to the MOGA method, MOGA generates a new sample set. For the sake of consistency, enter the same number of initial samples as used for the Screening optimization.


• Number of Samples Per Iteration: The number of samples that are iterated and updated with each iteration. This setting must be greater than or equal to the number of enabled input and output parameters, but less than or equal to the number of initial samples. Defaults to 100 for a Response Surface Optimization and to 50 for a Direct Optimization.

You can enter a minimum of 2 and a maximum of 10,000.

• Maximum Allowable Pareto Percentage: Convergence criterion. A percentage representing the ratio of the number of desired Pareto points to the Number of Samples Per Iteration. When this percentage is reached, the optimization is converged. For example, a value of 70% with Number of Samples Per Iteration set to 200 samples means that the optimization should stop once the resulting front of the MOGA optimization contains at least 140 points. Of course, the optimization stops before that if the Maximum Number of Iterations is reached.

If the Maximum Allowable Pareto Percentage is too low (below 30), the process may converge prematurely; if the value is too high (above 80), it may converge slowly. The value of this property depends on the number of parameters and the nature of the design space itself. Defaults to 70. Using a value between 55 and 75 works best for most problems. For details, see Convergence Criteria in MOGA-Based Multi-Objective Optimization (p. 245).

• Convergence Stability Percentage: Convergence criterion. A percentage representing the stability of the population, based on its mean and standard deviation. Allows you to minimize the number of iterations performed while still reaching the desired level of stability. When this percentage is reached, the optimization is converged. Defaults to 2%. To disregard convergence stability, set it to 0%. For details, see Convergence Criteria in MOGA-Based Multi-Objective Optimization (p. 245).

• Maximum Number of Iterations: Stop criterion. The maximum possible number of iterations the algorithm executes. If this number is reached without the optimization having reached convergence, iteration will stop. This also provides an idea of the maximum possible number of function evaluations that are needed for the full cycle, as well as the maximum possible time it may take to run the optimization. For example, the absolute maximum number of evaluations is given by: Number of Initial Samples + Number of Samples Per Iteration * (Maximum Number of Iterations - 1).

• Mutation Probability: Advanced option allowing you to specify the probability of applying a mutation on a design configuration. The value must be between 0 and 1. A larger value indicates a more random algorithm; if the value is 1, the algorithm becomes a pure random search. A low probability of mutation (<0.2) is recommended. Defaults to 0.01. For more information on mutation, see MOGA Steps to Generate a New Population (p. 248).

• Crossover Probability: Advanced option allowing you to specify the probability with which parent solutions are recombined to generate the offspring solutions. The value must be between 0 and 1. A smaller value indicates a more stable population and a faster (but less accurate) solution; if the value is 0, the parents are copied directly to the new population. A high probability of crossover (>0.9) is recommended. Defaults to 0.98.

• Type of Discrete Crossover: Advanced option allowing you to determine the kind of crossover for discrete parameters. Three crossover types are available: One Point, Two Points, or Uniform. According to the type of crossover selected, the children will be closer to or farther from their parents (closer for One Point and farther for Uniform). Defaults to One Point. For more information on crossover, see MOGA Steps to Generate a New Population (p. 248).

• Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table View (p. 158).


• Verify Candidate Points: Select to verify candidate points automatically at the end of the Optimization update.
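The two numeric rules given above for the Maximum Allowable Pareto Percentage criterion and the absolute maximum number of evaluations work out as follows (an illustrative Python sketch of the arithmetic only, not the ANSYS implementation):

```python
# Illustrative sketch only (not ANSYS code) of two MOGA rules described above:
# the Maximum Allowable Pareto Percentage convergence check and the absolute
# maximum number of design-point evaluations.
def pareto_converged(num_pareto_points, samples_per_iteration, max_pareto_pct=70):
    """Converged once the Pareto front holds at least max_pareto_pct% of the samples."""
    return 100.0 * num_pareto_points / samples_per_iteration >= max_pareto_pct

def max_evaluations(initial_samples, samples_per_iteration, max_iterations):
    """Absolute maximum: the initial set plus one new set per remaining iteration."""
    return initial_samples + samples_per_iteration * (max_iterations - 1)

print(pareto_converged(140, 200))    # True: 70% of 200 samples reached
print(pareto_converged(120, 200))    # False: only 60%
print(max_evaluations(100, 50, 20))  # 1050 = 100 + 50 * 19
```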
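The three discrete-crossover flavors mentioned above can be sketched with generic genetic-algorithm operators. This is a hypothetical illustration of how One Point, Two Points, and Uniform crossover typically behave, not the actual MOGA operator, whose details are described in the theory section:

```python
import random

# Generic sketch (not the ANSYS MOGA implementation) of the three discrete
# crossover types. Each parent is a list of discrete parameter levels.
def one_point(p1, p2, rng):
    """Child takes a contiguous prefix from p1 and the rest from p2."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def two_point(p1, p2, rng):
    """Child takes a contiguous middle slice from p2, the rest from p1."""
    a, b = sorted(rng.sample(range(1, len(p1)), 2))
    return p1[:a] + p2[a:b] + p1[b:]

def uniform(p1, p2, rng):
    """Each gene is chosen independently, so children land farthest from parents."""
    return [rng.choice(pair) for pair in zip(p1, p2)]

rng = random.Random(1)
pa, pb = [0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1]
print(one_point(pa, pb, rng))  # one switch point: a block of 0s, then 1s
print(uniform(pa, pb, rng))    # genes mixed independently
```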

3. Specify optimization objectives and constraints.

• In the Optimization tab Outline view, select the Objectives and Constraints node or an item underneath it.

• In the Table or Properties view, define the optimization objectives. For details, see Defining Optimization Objectives and Constraints (p. 153).

Note

For the MOGA method, at least one output parameter must have an objective defined. Multiple objectives are allowed.

4. Specify the optimization domain.

• In the Optimization tab Outline view, define the domain input parameters by enabling the desired input parameters under the Domain node.

• In the Table or Properties view, define the selected domain object. For details, see Defining the Optimization Domain (p. 148).

5. Click the Update toolbar button or right-click the Optimization node in the Outline view and select the Update option.

The result is a group of points or “sample set.” The points that are most in alignment with the objectives are displayed in the table as the Candidate Points for the optimization. The result also displays the read-only field Size of Generated Sample Set in the Properties view.

6. Select the Optimization node in the Outline view and review convergence details in the Optimization Status section of the Properties view. The following output properties are displayed:

• Converged: Indicates whether the optimization has converged.

• Obtained Pareto Percentage: A percentage representing the ratio of the number of Pareto points obtained by the optimization to the Number of Samples Per Iteration.

• Number of Iterations: Number of iterations executed. Each iteration corresponds to the generation of a population.

• Number of Evaluations: Number of design point evaluations performed. This value counts all points used in the optimization, including design points pulled from the cache. It can be used to measure the efficiency of the optimization method in finding the optimum design point.

• Number of Failures: Number of failed design points for the optimization. When a design point fails, a MOGA optimization does not retain this point in the Pareto front and does not attempt to solve another design point in its place. Failed design points are also not included on the Direct Optimization Samples chart.


• Size of Generated Sample Set: Number of samples generated in the sample set. This is the number of samples successfully updated for the last population generated by the algorithm; it usually equals the Number of Samples Per Iteration.

• Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum Number of Candidates input property.)

7. In the Outline view, you can select the Domain node or any object underneath it to review domain details in the Properties and Table views.

8. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display, in the Table view, a listing of the design points that were calculated during the optimization. If a raw optimization data point exists in the Design Point Parameter Set, the corresponding design point name is indicated in parentheses under the Name column.

Note

This list is compiled from raw data and does not show feasibility, ratings, etc. for the included design points.

Performing an NLPQL Optimization

The NLPQL (Nonlinear Programming by Quadratic Lagrangian) option can be used for both Response Surface Optimization and Direct Optimization. It allows you to generate a new sample set to provide a more refined approach than the Screening method. It is available for continuous input parameters only and can handle only one output parameter goal (other output parameters can be defined as constraints). See the Nonlinear Programming by Quadratic Lagrangian (NLPQL) (p. 233) theory section for more information.

Note

In some cases, particularly for Direct Optimization problems and simulations with a great deal of noise, the Epsilon for the NLPQL optimization method is not large enough to get above simulation noise. In these cases, it is recommended that you try the Adaptive Single-Objective (ASO) optimization method instead.

To generate samples and perform an NLPQL optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Properties view:

• Method Name: Set to NLPQL.

• Allowable Convergence Percentage: The tolerance to which the Karush-Kuhn-Tucker (KKT) optimality criterion is generated during the NLPQL process. A smaller value indicates more convergence iterations and a more accurate (but slower) solution; a larger value indicates fewer convergence iterations and a less accurate (but faster) solution. The typical default is 1.0e-06, which is consistent across all problem types since the inputs, outputs, and gradients are scaled during the NLPQL solution.


• Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table View (p. 158).

• Derivative Approximation: When analytical derivatives are not available, the NLPQL method approximates them numerically. This property allows you to specify the method of approximating the gradient of the objective function. Select one of the following options:

– Central Difference: The central difference method is used to calculate output derivatives. This method increases the accuracy of the gradient calculations, but doubles the number of design point evaluations. Default method for pre-existing databases and new Response Surface Optimization systems.

– Forward Difference: The forward difference method is used to calculate output derivatives. This method uses fewer design point evaluations, but decreases the accuracy of the gradient calculations. Default method for new Direct Optimization systems.

• Maximum Number of Iterations: The maximum possible number of iterations the algorithm executes. If convergence happens before this number is reached, the iterations will cease. This also provides an idea of the maximum possible number of function evaluations that are needed for the full cycle, as well as the maximum possible time it may take to run the optimization. For NLPQL, the number of evaluations can be approximated according to the Derivative Approximation gradient calculation method, as follows:

– For Central Difference: number of iterations * (2 * number of inputs + 1)

– For Forward Difference: number of iterations * (number of inputs + 1)
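The two evaluation estimates above can be written out as a short Python sketch (an illustration of the formulas only, not ANSYS code):

```python
# Illustrative sketch only (not ANSYS code): approximate number of design-point
# evaluations for the two gradient-approximation schemes described above.
def nlpql_evaluations(num_iterations, num_inputs, derivative="central"):
    """Central difference costs 2*inputs+1 evaluations per iteration; forward, inputs+1."""
    per_iteration = 2 * num_inputs + 1 if derivative == "central" else num_inputs + 1
    return num_iterations * per_iteration

print(nlpql_evaluations(10, 3, "central"))  # 70 = 10 * (2*3 + 1)
print(nlpql_evaluations(10, 3, "forward"))  # 40 = 10 * (3 + 1)
```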

3. Specify optimization objectives and constraints.

• In the Optimization tab Outline view, select the Objectives and Constraints node or an item underneath it.

• In the Table or Properties view, define the optimization objectives. For details, see Defining Optimization Objectives and Constraints (p. 153).

Note

For the NLPQL method, exactly one output parameter must have an objective defined.

4. Specify the optimization domain.

• In the Optimization tab Outline view, select the Domain node or an input parameter or parameter relationship underneath it.

• In the Table or Properties view, define the selected domain object. For details, see Defining the Optimization Domain (p. 148).

5. Click the Update toolbar button or right-click the Optimization node in the Outline view and select the Update option.

The result is a group of points or “sample set.” The points that are most in alignment with the objective are displayed in the table as the Candidate Points for the optimization. The result also displays the read-only field Size of Generated Sample Set in the Properties view. This value is always equal to 1 for an NLPQL result.

6. Select the Optimization node in the Outline view and review convergence details in the Optimization Status section of the Properties view. The following output properties are displayed:

• Converged: Indicates whether the optimization has converged.

• Number of Iterations: Number of iterations executed. Each iteration corresponds to one formulation and solution of the quadratic programming sub-problem or, alternatively, one evaluation of gradients.

• Number of Evaluations: Number of design point evaluations performed. This value counts all points used in the optimization, including design points pulled from the cache. It can be used to measure the efficiency of the optimization method in finding the optimum design point.

• Number of Failures: Number of failed design points for the optimization. When a design point fails, an NLPQL optimization changes the direction of its search; it does not attempt to solve an additional design point in its place and does not include it on the Direct Optimization Samples chart.

• Size of Generated Sample Set: Number of samples generated in the sample set. This is the number of iteration points obtained by the optimization and should be equal to the number of iterations.

• Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum Number of Candidates input property.)

7. In the Outline view, you can select the Domain node or any object underneath it to review domain details in the Properties and Table views.

8. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display, in the Table view, a listing of the design points that were calculated during the optimization. If a raw optimization data point exists in the Design Point Parameter Set, the corresponding design point name is indicated in parentheses under the Name column.

Note

This list is compiled from raw data and does not show feasibility, ratings, etc. for the included design points.

Performing an MISQP Optimization

The MISQP (Mixed-Integer Sequential Quadratic Programming) option can be used for both Response Surface Optimization and Direct Optimization. It allows you to generate a new sample set to provide a more refined approach than the Screening method. It is available for both continuous and discrete input parameters, and can handle only one output parameter goal (other output parameters can be defined as constraints).

To generate samples and perform an MISQP optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Properties view:

• Method Name: Set to MISQP.


• Allowable Convergence Percentage: The tolerance to which the Karush-Kuhn-Tucker (KKT) optimality criterion is generated during the MISQP process. A smaller value indicates more convergence iterations and a more accurate (but slower) solution; a larger value indicates fewer convergence iterations and a less accurate (but faster) solution. The typical default is 1.0e-06, which is consistent across all problem types since the inputs, outputs, and gradients are scaled during the MISQP solution.

• Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table View (p. 158).

• Derivative Approximation: When analytical derivatives are not available, the MISQP method approximates them numerically. This property allows you to specify the method of approximating the gradient of the objective function. Select one of the following options:

– Central Difference: The central difference method is used to calculate output derivatives. This method increases the accuracy of the gradient calculations, but doubles the number of design point evaluations. Default method for pre-existing databases and new Goal Driven Optimization systems.

– Forward Difference: The forward difference method is used to calculate output derivatives. This method uses fewer design point evaluations, but decreases the accuracy of the gradient calculations. Default method for new Direct Optimization systems.

• Maximum Number of Iterations: The maximum possible number of iterations the algorithm executes. If convergence happens before this number is reached, the iterations will cease. This also provides an idea of the maximum possible number of function evaluations that are needed for the full cycle, as well as the maximum possible time it may take to run the optimization. For MISQP, the number of evaluations can be approximated according to the Derivative Approximation gradient calculation method, as follows:

– For Central Difference: number of iterations * (2 * number of inputs + 1)

– For Forward Difference: number of iterations * (number of inputs + 1)

• Verify Candidate Points: Select to verify candidate points automatically at the end of the Optimization update.

3. Specify optimization objectives and constraints.

• In the Optimization tab Outline view, select the Objectives and Constraints node or an item underneath it.

• In the Table or Properties view, define the optimization objectives. For details, see Defining Optimization Objectives and Constraints (p. 153).

Note

For the MISQP method, exactly one output parameter must have an objective defined.

4. Specify the optimization domain.

• In the Optimization tab Outline view, select the Domain node or an input parameter underneath it.


• In the Table or Properties view, define the optimization domain. For details, see Defining the Optimization Domain (p. 148).

5. Click the Update toolbar button or right-click the Optimization node in the Outline view and select the Update option.

The result is a group of points or “sample set.” The points that are most in alignment with the objective are displayed in the table as the Candidate Points for the optimization. The result also displays the read-only field Size of Generated Sample Set in the Properties view. This value is always equal to 1 for an MISQP result.

6. Select the Optimization node in the Outline view and review convergence details in the Optimization Status section of the Properties view. The following output properties are displayed:

• Converged: Indicates whether the optimization has converged.

• Number of Iterations: Number of iterations executed. Each iteration corresponds to one formulation and solution of the quadratic programming sub-problem or, alternatively, one evaluation of gradients.

• Number of Evaluations: Number of design point evaluations performed. This value counts all points used in the optimization, including design points pulled from the cache. It can be used to measure the efficiency of the optimization method in finding the optimum design point.

• Number of Failures: Number of failed design points for the optimization. When a design point fails, an MISQP optimization changes the direction of its search; it does not attempt to solve an additional design point in its place and does not include it on the Direct Optimization Samples chart.

• Size of Generated Sample Set: Number of samples generated in the sample set. This is the number of design points updated in the last iteration.

• Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum Number of Candidates input property.)

7. In the Outline view, you can select the Domain node or any object underneath it to review domain details in the Properties and Table views.

8. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display, in the Table view, a listing of the design points that were calculated during the optimization. If a raw optimization data point exists in the Design Point Parameter Set, the corresponding design point name is indicated in parentheses under the Name column.

Note

This list is compiled from raw data and does not show feasibility, ratings, etc. for the included design points.

Performing an Adaptive Single-Objective Optimization

The Adaptive Single-Objective (OSF + Kriging + MISQP with domain reduction) option can be used only for Direct Optimization systems. This gradient-based method employs automatic intelligent refinement to provide the global optima. It requires a minimum number of design points to build the Kriging response surface but, in general, reduces the number of design points necessary for the optimization. Failed design points are treated as inequality constraints, making it fault-tolerant.


The Adaptive Single-Objective method is available for input parameters that are continuous and continuous with Manufacturable Values. It can handle only one output parameter goal (other output parameters can be defined as constraints). It does not support the use of parameter relationships in the optimization domain. See the Adaptive Single-Objective Optimization (ASO) (p. 241) theory section for details.

Note

To display advanced options, go to Tools > Options > Design Exploration > Show Advanced Options.

To generate samples and perform an Adaptive Single-Objective optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Optimization section of the Properties view:

• Method Name: Set to Adaptive Single-Objective.

• Number of Initial Samples: The number of samples generated for the initial Kriging and, after each domain reduction, for the construction of the next Kriging.

You can enter a minimum of (NbInp+1)*(NbInp+2)/2 (also the minimum number of OSF samples required for the Kriging construction) or a maximum of 10,000. Defaults to (NbInp+1)*(NbInp+2)/2 for a Direct Optimization (there is no default for a Response Surface Optimization).

Because of the Adaptive Single-Objective workflow (in which a new OSF sample set is generated after each domain reduction), increasing the number of OSF samples does not necessarily improve the quality of the results and significantly increases the number of evaluations.

• Random Generator Seed: Advanced option allowing you to specify the value used to initialize the random number generator invoked internally by the OSF algorithm. The value must be a positive integer. This property allows you to generate different samplings (by changing the value) or to regenerate the same sampling (by keeping the same value). Defaults to 1.

• Maximum Number of Cycles: Determines the number of optimization loops the algorithm needs, which in turn determines the discrepancy of the OSF. The optimization is essentially combinatorial, so a large number of cycles will slow down the process; however, it will also make the discrepancy of the OSF smaller. The value must be greater than 0. For practical purposes, 10 cycles is usually good for up to 20 variables. Defaults to 10.

• Number of Screening Samples: The number of samples for the screening generation on the current Kriging. This value is used to create the next Kriging (based on error prediction) and verified candidates.

You can enter a minimum of (NbInp+1)*(NbInp+2)/2 (also the minimum number of OSF samples required for the Kriging construction) or a maximum of 10,000. Defaults to 100*NbInp for a Direct Optimization (there is no default for a Response Surface Optimization).

The larger the screening sample set, the better the chances of finding good verified points. However, too many points can result in a divergence of the Kriging.

• Number of Starting Points: Determines the number of local optima to be explored; the larger the set of starting points, the more local optima will be explored. In the case of a linear surface, for example, it is not necessary to use many points. This value must be less than the Number of Screening Samples because the starting points are selected from that sample set. Defaults to the Number of Initial Samples.

• Maximum Number of Evaluations: Stop criterion. The maximum possible number of evaluations (design points) to be calculated by the algorithm. If convergence occurs before this number is reached, evaluations will cease. This value also provides an idea of the maximum possible time it takes to run the optimization. Defaults to 20*(NbInp + 1).

• Maximum Number of Domain Reductions: Stop criterion. The maximum possible number of domain reductions for input variation. (No information is known about the size of the reduction beforehand.) Defaults to 20.

• Percentage of Domain Reductions: Stop criterion. The minimum size of the current domain relative to the initial domain. For example, with one input ranging between 0 and 100, the domain size is equal to 100. If the percentage of domain reduction is 1%, the current working domain size cannot be less than 1 (such as an input ranging between 5 and 6). Defaults to 0.1.

• Convergence Tolerance: Stop criterion. The minimum allowable gap between the values of two successive candidates. If the difference between two successive candidates is smaller than the Convergence Tolerance value, the algorithm is stopped. A smaller value indicates more convergence iterations and a more accurate (but slower) solution. A larger value indicates fewer convergence iterations and a less accurate (but faster) solution. Defaults to 1E-06.

• Retained Domain per Iteration (%): Advanced option that allows you to specify the minimum percentage of the domain you want to keep after a domain reduction. The value must be between 10% and 90%. A larger value indicates less domain reduction, which implies better exploration but a slower solution. A smaller value indicates a faster (and more accurate) solution, with the risk of it being a local one. Defaults to 40%.

• Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table View (p. 158).
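As a quick sanity check, the sample-count formulas above can be evaluated directly. The sketch below is illustrative arithmetic only; the function and variable names are assumptions for this example and are not part of any DesignXplorer API. NbInp stands for the number of enabled input parameters, as in the text:

```python
# Illustrative arithmetic for the Adaptive Single-Objective defaults above.
# Not a DesignXplorer API -- names here are purely for illustration.

def aso_sample_counts(nb_inp):
    """nb_inp: number of enabled input parameters (NbInp in the text)."""
    min_screening = (nb_inp + 1) * (nb_inp + 2) // 2  # minimum Number of Screening Samples
    default_screening = 100 * nb_inp                  # default for a Direct Optimization
    default_max_evals = 20 * (nb_inp + 1)             # default Maximum Number of Evaluations
    return min_screening, default_screening, default_max_evals

# For a study with 5 input parameters:
print(aso_sample_counts(5))  # (21, 500, 120)
```

So a 5-input study needs at least 21 screening samples per Kriging, defaults to 500 screening samples, and will evaluate at most 120 design points with the default settings.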

3. Specify optimization objectives and constraints.

• In the Optimization tab Outline view, select the Objectives and Constraints node or an item underneath it.

• In the Table or Properties view, define the Optimization Objectives. For details, see Defining Optimization Objectives and Constraints (p. 153).

Note

• For the Adaptive Single-Objective method, exactly one output parameter must have an objective defined.

• After you have defined an objective, a warning icon in the Message column of the Outline view means that you have exceeded the recommended number of input parameters. For more information, see Number of Input Parameters for DOE Types (p. 69).

4. Specify the optimization domain.

• In the Optimization tab Outline view, select the Domain node or an input parameter underneath it.

• In the Table or Properties view, define the Optimization Domain. For details, see Defining the Optimization Domain (p. 148).

5. Click the Update toolbar button, or right-click the Optimization node in the Outline view and select the Update context option.

The result is a group of points, or “sample set.” The points that are most in alignment with the objectives are displayed in the table as the Candidate Points for the optimization. The result also displays the read-only field Size of Generated Sample Set in the Properties view.

6. Select the Optimization node in the Outline view and review convergence details in the Optimization Status section of the Properties view. The following output properties are displayed:

• Converged: Indicates whether the optimization has converged.

• Number of Evaluations: Number of design point evaluations performed. This value takes into account all points used in the optimization, including design points pulled from the cache. It corresponds to the total of LHS points and verification points.

• Number of Domain Reductions: Number of times the domain is reduced.

• Number of Failures: Number of failed design points for the optimization. When a design point fails, an Adaptive Single-Objective optimization changes the direction of its search; it does not attempt to solve an additional design point in its place and does not include it on the Direct Optimization Samples chart.

• Size of Generated Sample Set: Number of samples generated in the sample set. This is the number of unique design points successfully updated.

• Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum Number of Candidates input property.)

7. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display a listing of the design points that were calculated during the optimization in the Table view. If the raw optimization data point exists in the Design Point Parameter Set, then the corresponding design point name is indicated in parentheses under the Name column.

Note

This list is compiled from raw data and does not show feasibility, ratings, etc. for the included design points.

Performing an Adaptive Multiple-Objective Optimization

The Adaptive Multiple-Objective (Kriging + MOGA) option can be used only for Direct Optimization systems. It is an iterative algorithm that allows you to either generate a new sample set or use an existing set, providing a more refined approach than the Screening method. It uses the same general approach as MOGA, but applies the Kriging error predictor to reduce the number of evaluations needed to find the global optimum.

The Adaptive Multiple-Objective method is only available for continuous input parameters and continuous input parameters with Manufacturable Values. It can handle multiple objectives and multiple constraints. For details, see Adaptive Multiple-Objective Optimization (AMO) (p. 251).

Note

To display advanced options, go to Tools > Options > Design Exploration > Show Advanced Options.

To generate samples and perform an Adaptive Multiple-Objective optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Optimization section of the Properties view:

• Method Name: Set to Adaptive Multiple-Objective.

• Type of Initial Sampling: If you do not have any parameter relationships defined, specify Screening or OSF. Defaults to Screening. If you do have parameter relationships defined, the initial sampling must be performed by the constrained sampling algorithms (because parameter relationships constrain the sampling) and this property is automatically set to Constrained Sampling.

• Random Generator Seed: Advanced option that displays only when Type of Initial Sampling is set to OSF. Specify the value used to initialize the random number generator invoked internally by the OSF algorithm. The value must be a positive integer. This property allows you to generate different samplings (by changing the value) or to regenerate the same sampling (by keeping the same value). Defaults to 1.

• Maximum Number of Cycles: Determines the number of optimization loops the algorithm needs, which in turn determines the discrepancy of the OSF. The optimization is essentially combinatorial, so a large number of cycles will slow down the process; however, it will also make the discrepancy of the OSF smaller. The value must be greater than 0. For practical purposes, 10 cycles is usually sufficient for up to 20 variables. Defaults to 10.

• Number of Initial Samples: Specify the initial number of samples to be used. The minimum recommended number of initial samples is 10 times the number of input parameters; the larger the initial sample set, the better your chances of finding the input parameter space that contains the best solutions.

Must be greater than or equal to the number of enabled input and output parameters; the number of enabled parameters is also the minimum number of samples required to generate the Sensitivities chart. You can enter a minimum of 2 and a maximum of 10,000. Defaults to 100.

If you are switching from the Screening method to the MOGA method, MOGA generates a new sample set. For the sake of consistency, enter the same number of initial samples as used for the Screening optimization.

• Number of Samples Per Iteration: Defaults to 100. The number of samples that are iterated and updated at each iteration. This setting must be less than or equal to the number of initial samples.

• Maximum Allowable Pareto Percentage: Convergence criterion. A percentage which represents the ratio of the number of desired Pareto points to the Number of Samples Per Iteration. When this percentage is reached, the optimization is converged. For example, a value of 70% with Number of Samples Per Iteration set at 200 samples would mean that the optimization should stop once the resulting front of the MOGA optimization contains at least 140 points. Of course, the optimization stops before that if the Maximum Number of Iterations is reached.

If the Maximum Allowable Pareto Percentage is too low (below 30), the process may converge prematurely; if the value is too high (above 80), it may converge slowly. The value of this property depends on the number of parameters and the nature of the design space itself. Defaults to 70. Using a value between 55 and 75 works best for most problems. For details, see Convergence Criteria in MOGA-Based Multi-Objective Optimization (p. 245).

• Convergence Stability Percentage: Convergence criterion. A percentage which represents the stability of the population, based on its mean and standard deviation. Allows you to minimize the number of iterations performed while still reaching the desired level of stability. When this percentage is reached, the optimization is converged. Defaults to 2%. To not take the convergence stability into account, set it to 0%. For details, see Convergence Criteria in MOGA-Based Multi-Objective Optimization (p. 245).

• Maximum Number of Iterations: Stop criterion. The maximum possible number of iterations the algorithm executes. If this number is reached without the optimization having reached convergence, iteration will stop. This also provides an idea of the maximum possible number of function evaluations that are needed for the full cycle, as well as the maximum possible time it may take to run the optimization. For example, the absolute maximum number of evaluations is given by: Number of Initial Samples + Number of Samples Per Iteration * (Maximum Number of Iterations - 1).

• Mutation Probability: Advanced option allowing you to specify the probability of applying a mutation on a design configuration. The value must be between 0 and 1. A larger value indicates a more random algorithm; if the value is 1, the algorithm becomes a pure random search. A low probability of mutation (<0.2) is recommended. Defaults to 0.01. For more information on mutation, see MOGA Steps to Generate a New Population (p. 248).

• Crossover Probability: Advanced option allowing you to specify the probability with which parent solutions are recombined to generate the offspring solutions. The value must be between 0 and 1. A smaller value indicates a more stable population and a faster (but less accurate) solution; if the value is 0, then the parents are copied directly to the new population. A high probability of crossover (>0.9) is recommended. Defaults to 0.98.

• Type of Discrete Crossover: Advanced option allowing you to determine the kind of crossover for discrete parameters. Three crossover types are available: One Point, Two Points, or Uniform. According to the type of crossover selected, the children will be closer to or farther from their parents (closer for One Point and farther for Uniform). Defaults to One Point. For more information on crossover, see MOGA Steps to Generate a New Population (p. 248).

• Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table View (p. 158).
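The two stop-criterion formulas above can be checked with a short calculation. This is illustrative arithmetic only; the function names below are assumptions for this sketch, not a DesignXplorer API. It reproduces the 70% / 200-samples example from the text:

```python
# Illustrative arithmetic for the Adaptive Multiple-Objective stop criteria.
# Not a DesignXplorer API -- names are for illustration only.

def max_total_evaluations(n_initial, n_per_iteration, max_iterations):
    # Absolute maximum number of evaluations, per the formula in the text.
    return n_initial + n_per_iteration * (max_iterations - 1)

def pareto_points_required(pareto_percentage, n_per_iteration):
    # Number of Pareto points at which the Maximum Allowable Pareto
    # Percentage criterion declares convergence.
    return int(pareto_percentage / 100 * n_per_iteration)

print(max_total_evaluations(100, 100, 20))  # 2000
print(pareto_points_required(70, 200))      # 140, matching the example in the text
```

With the default 100 initial samples, 100 samples per iteration, and 20 iterations, the run is bounded by 2000 design point evaluations.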

3. Specify optimization objectives and constraints.

• In the Optimization tab Outline view, select the Objectives and Constraints node or an item underneath it.

• In the Table or Properties view, define the Optimization Objectives. For details, see Defining Optimization Objectives and Constraints (p. 153).

Note

– For the Adaptive Multiple-Objective method, at least one output parameter must have an objective defined. Multiple objectives are allowed.

– After you have defined an objective, a warning icon in the Message column of the Outline view means that you have exceeded the recommended number of input parameters. For more information, see Number of Input Parameters for DOE Types (p. 69).

4. Specify the optimization domain.

• In the Optimization tab Outline view, select the Domain node or an input parameter or parameter relationship underneath it.

• In the Table or Properties view, define the selected domain object. For details, see Defining the Optimization Domain (p. 148).

5. Click the Update toolbar button, or right-click the Optimization node in the Outline view and select the Update context option.

The result is a group of points, or “sample set.” The points that are most in alignment with the objectives are displayed in the table as the Candidate Points for the optimization. The result also displays the read-only field Size of Generated Sample Set in the Properties view.

6. Select the Optimization node in the Outline view and review convergence details in the Optimization Status section of the Properties view. The following output properties are displayed:

• Converged: Indicates whether the optimization has converged.

• Number of Evaluations: Number of design point evaluations performed. This value takes into account all points used in the optimization, including design points pulled from the cache. It can be used to measure the efficiency of the optimization method in finding the optimum design point.

• Number of Failures: Number of failed design points for the optimization. When a design point fails, an Adaptive Multiple-Objective optimization does not retain this point in the Pareto front to generate the next population, attempt to solve an additional design point in its place, or include it on the Direct Optimization Samples chart.

• Size of Generated Sample Set: Number of samples generated in the sample set. This is the number of samples successfully updated for the last population generated by the algorithm; it usually equals the Number of Samples Per Iteration.

• Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum Number of Candidates input property.)

7. In the Outline view, you can select the Domain node or any object underneath it to review domain details in the Properties and Table views.

8. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display a listing of the design points that were calculated during the optimization in the Table view. If the raw optimization data point exists in the Design Point Parameter Set, then the corresponding design point name is indicated in parentheses under the Name column.

Note

This list is compiled from raw data and does not show feasibility, ratings, etc. for the included design points.

Performing an Optimization with an External Optimizer

In addition to using the optimization algorithms provided by DesignXplorer, you can also use external optimizers within the DesignXplorer environment. You can install and load optimization extensions, integrating the features of an external optimizer into your design exploration workflow. An optimization extension specifies the properties, objectives and constraints, and functionality for one or more optimizers.

Once an extension is installed, it is displayed in the Extensions Manager and can be loaded to the project. Once the extension is loaded to the project, the optimizers defined by the extension are displayed as options in the Method Name property drop-down.

You can also specify whether extensions are saved to your project. For more detailed information on saving extensions, see the Extensions section in the Workbench User's Guide.

Note

For information on the creation of optimization extensions, see the Application Customization Toolkit Developer's Guide and the Application Customization Toolkit Reference Guide. These documents are part of the ANSYS Customization Suite on the ANSYS Customer Portal. To access documentation files on the ANSYS Customer Portal, go to http://support.ansys.com/documentation.

Related Topics:

Locating and Downloading Available Extensions
Installing an Optimization Extension
Loading an Optimization Extension
Selecting an External Optimizer
Setting Up an External Optimizer Project
Performing the Optimization

Locating and Downloading Available Extensions

You can use custom optimization extensions that are developed by your company. You can also download existing extensions from the ANSYS Extension Library (on the Downloads page of the ANSYS Customer Portal). Optimization extensions for DesignXplorer are located in the ACT Library. To access documentation files on the ANSYS Customer Portal, go to http://support.ansys.com/documentation.

Installing an Optimization Extension

To use an optimization extension, you must first install it to make it available to ANSYS Workbench and DesignXplorer.

To install an extension:

1. In the Project Schematic, select the Extensions > Install Extension menu option.

2. Browse to the location of the extension you wish to install.

In most cases, the extension you select will be a “binary” extension in which the optimizer has already been defined and compiled into the .wbex file format. (Once the extension is compiled, the optimizer definition can no longer be changed.)

3. Select the extension and click Open.

The extension is installed in your App Data directory. Once installed, it is shown in the Extensions Manager and remains available to be loaded to your project(s).

Loading an Optimization Extension

Once the extension has been installed, it becomes available to be loaded to your ANSYS Workbench project.

Note

The extension must be loaded separately to each project unless you have specified it as a default extension via the Workbench Tools > Options dialog. For more detailed information, see the Extensions section in the Workbench User's Guide.

To load an extension to your project:

1. In the Project Schematic, select the Extensions > Manage Extensions menu option.

2. The Extensions Manager dialog shows all the installed extensions. Select the check box for each extension to be loaded to the project, and close the dialog.

3. The extension should now be loaded to the project, which means that it is available to be selected as an optimization method. You can select Extensions > View Log File to verify that the extension loaded successfully.

Selecting an External Optimizer

Once an extension has been loaded to your project, the external optimizers defined in it are available to be selected and used as an optimization method. In the Optimization tab Properties view, external optimizers are included as options in the Method Name drop-down, along with the optimization methods provided by DesignXplorer. Once an optimizer is selected, you can edit the optimization properties.

DesignXplorer filters the Method Name list for applicability to the current project; i.e., it displays only those optimization methods that can be used to solve the optimization problem as it is currently defined. When no objectives or constraints are defined for a project, all optimization methods are displayed in the Method Name drop-down. If you already know that you want to use a particular external optimizer, it is recommended that you select it as the method before setting up the rest of the project. Otherwise, the optimization method could be inadvertently filtered from the list.

In some cases, a necessary extension is not available when a project is reopened. This can happen when the extension is unloaded or when the extension used to solve the project was not saved to the project upon exit. When this happens, the Method Name drop-down displays an alternate label, <extensionname>@<optimizername>, instead of the external optimizer name. You can still do post-processing with the project (because the optimizer has already solved the problem), but the properties are read-only and you cannot use the optimizer to perform further calculations. When you select a different optimization method, the alternate label will disappear from the list of options.

Setting Up an External Optimizer Project

The DesignXplorer interface shows only the optimization functionality that is specifically defined in the extension. Additionally, it filters objectives and constraints according to the optimization method selected; i.e., only those objects supported by the selected optimization method are available for selection. For example, if you have selected an optimizer that does not support the Maximize objective type, Maximize will not be displayed as an option in the Objective Type drop-down menu.

If you already have a specific problem you want to solve, it is recommended that you set up the project before selecting an optimization method from the Method Name drop-down. Otherwise, the desired objectives and constraints could be filtered from the user interface.

Performing the Optimization

Once the external optimizer is selected and the optimization study is defined, the process of running the optimization is the same as for any of the native optimization algorithms. DesignXplorer's intermediate functionality (such as iterative updates of the History charts and sparklines) and post-processing functionality (such as candidate points and other optimization charts) are still available.

Defining the Optimization Domain

When you edit the Optimization component of a Goal Driven Optimization system, there are multiple ways of defining the parameter space for each input parameter. The optimization domain settings allow you to reduce the range of variation for the input so that samples are generated only in the reduced space (i.e., within the defined bounds), ensuring that only samples that can be used for your optimization are created. You can do this for two types of domain objects: input parameters and/or parameter relationships. Both types are under the Domain node in the Outline view.

When you select the Domain node or any object underneath it, the Table view shows the input parameters and parameter relationships that are defined and enabled for the optimization. (Disabled domain objects do not display in the table.)

Defining Input Parameters

Under the Domain node in the Outline view, you can enable or disable the input parameters for the parameter space by toggling the check box under the Enabled column for the corresponding input parameter. When the input parameter is enabled, you can edit the bounds and, where applicable, the starting value by either of the following methods:

• In the Outline view, select the Domain node. Edit the input parameter domain via the Table view.

• In the Outline view, select an input parameter under the Domain node. Edit the input parameter domain via either the Properties or Table view.

The following editable settings are available for enabled input parameters in the Properties and Table views:

Lower Bound

Allows you to define the Lower Bound for the optimization input parameter space; increasing the defined lower bound would confine the optimization to a subset of the DOE domain. By default, corresponds to the following values defined in the DOE:

• the Lower Bound (for continuous parameters)

• the lowest discrete Level (for discrete parameters)

• the lowest manufacturable Level (for continuous parameters with Manufacturable Values)

Upper Bound

Allows you to define the Upper Bound for the input parameter space. By default, corresponds to the following values defined in the DOE:

• the Upper Bound (for continuous parameters)

• the highest discrete Level (for discrete parameters)

• the highest manufacturable Level (for continuous parameters with Manufacturable Values)

Starting Value

Available only for the NLPQL and MISQP optimization methods. Allows you to specify where the optimization starts for each input parameter.

Because NLPQL and MISQP are gradient-based methods, the starting point in the parameter space determines the candidates found; with a poor starting point, NLPQL and MISQP could find a local optimum, which is not necessarily the same as the global optimum. This setting gives you more control over your optimization results by allowing you to specify exactly where in the parameter space the optimization should begin.

Must fall between the Lower Bound and Upper Bound.

Must fall within the domain constrained by the enabled parameter relationships. For more information, see Defining Parameter Relationships (p. 150).
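To see why the Starting Value matters for a gradient-based search, consider a toy one-variable function with both a local and a global minimum. The sketch below uses plain gradient descent, not the actual NLPQL or MISQP algorithms, purely to illustrate the starting-point behavior described above; the function and all names are assumptions for this example:

```python
# Toy illustration of starting-point sensitivity in a gradient-based search.
# f(x) = x**4 - 3*x**2 + x has a local minimum near x = 1.0 and a global
# minimum near x = -1.366. This is NOT NLPQL or MISQP -- just plain
# gradient descent used to make the point.

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        grad = 4 * x**3 - 6 * x + 2  # f'(x)
        x -= lr * grad
    return x

print(round(descend(2.0), 3))   # ~1.0: converges to the local minimum
print(round(descend(-2.0), 3))  # ~-1.366: converges to the global minimum
```

Starting at x = 2.0 traps the search in the local minimum, while starting at x = -2.0 reaches the global one; this is exactly the kind of difference a well-chosen Starting Value avoids.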

For each disabled input parameter, specify the desired value to use in the optimization. By default, the value was copied from the current design point when the Optimization system was created.

Note

When the optimization is refreshed, the disabled input values will be persisted; they will not be updated to the current design point values.

Defining Parameter Relationships

Parameter relationships give you greater flexibility in defining optimization limits on input parameters than the standard single-parameter constraints such as Greater Than, Less Than, and Inside Bounds. When you define parameter relationships, you can specify expression-based relationships between multiple input parameters, with the values remaining physically bounded and reflecting the constraints on the optimization problem.

Note

To specify parameter relationships for outputs, you can create derived parameters. You can create derived parameters for an analysis system by providing expressions. The derived parameters are then passed to DesignXplorer as outputs. For more information, see Custom Parameters.

A parameter relationship consists of one operator and two expressions. In the example below, you can see that two parameter relationships have been defined, each involving input parameters P1 and P2.
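Conceptually, a candidate point is feasible only when every enabled relationship holds. A minimal sketch of that check is shown below, assuming hypothetical names and data structures; this is not the DesignXplorer API:

```python
# Hypothetical sketch of how a parameter relationship (left expression,
# operator, right expression) constrains candidate points. Not the
# DesignXplorer API -- names and data structures are illustrative only.
import operator

OPS = {"<=": operator.le, ">=": operator.ge}  # the two available operators

def satisfies(point, relationships):
    """point: dict of input parameter values, e.g. {"P1": 2.0, "P2": 5.0}."""
    for left, op, right in relationships:
        # Each expression is evaluated against the point's parameter values.
        if not OPS[op](eval(left, {}, dict(point)), eval(right, {}, dict(point))):
            return False
    return True

# Two relationships involving P1 and P2, as in the example above:
rels = [("P1", "<=", "P2"), ("P1 + P2", "<=", "10")]
print(satisfies({"P1": 2.0, "P2": 5.0}, rels))  # True
print(satisfies({"P1": 6.0, "P2": 5.0}, rels))  # False
```

The second point fails because P1 exceeds P2 (and the sum exceeds 10), so it would be excluded from the constrained sampling domain.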

This section discusses how to create, edit, enable/disable, and view parameter relationships.

Creating Parameter Relationships

You can create new domain parameter relationships by the following methods:

• In the Outline view, right-click the Parameter Relationships node and select Insert Parameter Relationship from the context menu.

• In the Parameter Relationships section of the Table view, type parameter relationship details into the bottom row.

Editing Parameter Relationships

Once a parameter relationship has been created, you can edit it by the following methods:

• In the Outline view, select either the Domain node or the Parameter Relationships node underneath it. Edit the parameter relationship via the Table view.

• In the Outline view, select a parameter relationship under the Parameter Relationships node. Edit the parameter relationship via the Properties or Table view.

The table below shows the editing tasks that the Properties, Table, and Outline views allow you to perform.

Task | Outline | Table | Properties
---- | ------- | ----- | ----------
Rename a relationship by double-clicking its name and typing in a custom name | X | X | X
Edit expressions by typing into the expression cell | | X | X
Change operators by selecting from the drop-down menu | | X | X
Duplicate a relationship by right-clicking it and selecting Duplicate | X | X | X
Delete a relationship by right-clicking it and selecting Delete | X | X | X

Enabling/Disabling Parameter Relationships

You can enable or disable a parameter relationship by selecting/deselecting the Enabled check box. Enabling a parameter relationship will cause it to be displayed on the corresponding History chart and sparkline. For more information, see Using the History Chart (p. 165).

Parameter Relationship Properties

The following properties are available for parameter relationships:

Name

Editable in the Properties and Table views. Each parameter relationship is given a default Name, such as “Parameter” or “Parameter 2,” based on the order in which the parameter relationship was created. When you define both the left and right expressions for the parameter relationship, the default name is replaced by the relationship. For example, “Parameter 2” may become “P1<=P2.” The name will be updated accordingly when either of the expressions is modified.

The Name property allows you to edit the name of the parameter relationship. Once you edit the Name, the name persists. To resume the automated naming system, just delete the custom name and leave the property empty.

Left Expression/Right Expression

Editable in the Properties and Table views. Allows you to define the parameter relationship expressions.

Left Expression Quantity Name/Right Expression Quantity Name

Viewable in the Properties view. Shows the quantity type for the expressions in the parameter relationship.

Operator

Editable in the Properties and Table views. Allows you to select the expression operator from a drop-down menu. Available values are <= and >=.

Viewing Current Expression Values

For optimization methods that have a Starting Value (NLPQL and MISQP), each expression for a parameter relationship is evaluated based on the starting parameter values.

When the evaluation is complete, the value for each expression is displayed:

• In the Domain node Table view. To view expression values for a parameter relationship, click the plus icon next to its name. The values display beneath the corresponding expression.

• In the Candidate Points node Table view. In the Properties view, select the Show Parameter Relationships check box. Parameter relationships that are defined and enabled, along with their expressions (and current expression values, for NLPQL and MISQP), are shown in the Candidate Points table. See Viewing and Editing Candidate Points in the Table View (p. 158).

If the evaluation fails, an error message is displayed in the Outline and Properties views. Review the error to identify problems with the corresponding parameter relationship.

Note

• The evaluation may fail because the selected optimization method does not support parameter relationships, or because the optimization includes one or more invalid parameter relationships. Parameter relationships may be invalid if they contain quantities that are not comparable, parameters for which the values are unknown, or expressions that are incorrect.

• The gradient-based optimization methods, NLPQL and MISQP, require a feasible Starting Value that falls within the domain constrained by the enabled parameter relationships. If the Starting Value is infeasible, the optimization cannot be updated.

Defining Optimization Objectives and Constraints

When you edit the Optimization cell of a Goal Driven Optimization system and select the Objectives and Constraints node in the Optimization tab Outline view, the Optimization Objectives section in the Table view allows you to define design goals in the form of objectives and constraints that will be used to generate optimized designs. Additional optimization properties can be set in the Properties view. The optimization approach used in the design exploration environment departs in many ways from traditional optimization techniques, giving you added flexibility in obtaining the desired design configuration.

Note

If you are using an external optimizer, DesignXplorer filters objectives and constraints according to the optimization method selected (i.e., only the objective types and constraint types supported by the selected optimization method are available for selection). For example, if you have selected an optimizer that does not support the Maximize objective type, Maximize will not be displayed as an option in the Objective Type drop-down menu.

The following optimization settings are available and can be edited in both the Table view and Properties view.

Objective Name

Allows you to define an objective for each parameter. If desired, you can also enter a name for the objective. Each objective is given a default name based on its defined properties, which is updated each time the objective is modified. For example, when an objective is defined to Minimize parameter P1, the objective name is set to “Minimize P1.” If the objective type is changed to Maximize, the objective name becomes “Maximize P1.” Then, if a constraint type of Values >= Lower Bound is added and Lower Bound is set to 3, the objective name becomes “Maximize P1; P1 >= 3.” The name is displayed under the Objectives and Constraints node of the Outline view.

The Objective Name property also allows you to edit the name of the objective. In the Optimization tab, you can edit the name of an objective in the Outline view. Additionally, you can select the Objectives and Constraints node of the Outline view and edit the name in the Table view, or select the objective itself in the Outline view and edit the name in either the Properties or the Table view. Once you edit the Objective Name, the name persists and is no longer changed by modifications to the objective. To resume the automated naming system, just delete the custom name and leave the property empty.
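The default-naming behavior described above can be sketched as a small function that rebuilds the name from the objective and constraint properties each time they change. The function and its parameter names are illustrative, not product code, and cover only the cases used in the example above.

```python
# Hedged sketch of default objective naming, producing names like
# "Maximize P1; P1 >= 3". Illustrative only; not the DesignXplorer API.

def default_name(param, objective_type=None, constraint_type=None,
                 lower_bound=None):
    parts = []
    if objective_type in ("Minimize", "Maximize"):
        parts.append(f"{objective_type} {param}")
    if constraint_type == "Values >= Lower Bound" and lower_bound is not None:
        parts.append(f"{param} >= {lower_bound}")
    return "; ".join(parts)

print(default_name("P1", objective_type="Maximize"))
# Maximize P1
print(default_name("P1", "Maximize", "Values >= Lower Bound", 3))
# Maximize P1; P1 >= 3
```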

Available options vary according to the type of parameter and whether it is an input or outputparameter.

Parameter

Allows you to select the parameter (either input or output) for which the objective is being defined.


Objective Type

Available for both continuous input parameters and output parameters. Allows you to define an objective by specifying the Objective Type (see the tables below for available objective types).

Constraint Type

Available for output parameters, discrete input parameters, and continuous input parameters with Manufacturable Values. Allows you to define a constraint by specifying the Constraint Type (see the tables below for available constraint types).

Objective Target

For a parameter that has an Objective Type of Seek Target, allows you to specify the target value for the objective.

Constraint Lower Bound/Constraint Upper Bound

For a parameter with a Constraint Type other than Lower Bound <= Values <= Upper Bound, set Lower Bound to the target value for the constraint.

For a parameter with a Constraint Type of Lower Bound <= Values <= Upper Bound, set both the Lower Bound and the Upper Bound to define an acceptable range for the output. The second value must be greater than the first. Available only for continuous output parameters.

Objectives for Continuous Input Parameters

(X = an input parameter, XLower = lower limit, XUpper = upper limit, XTarget = Target)

No Objective
  Description: Keep the input parameter within its upper and lower bounds.
  Mathematical meaning: XLower <= X <= XUpper

Minimize
  Description: Minimize the input parameter within its range.
  Mathematical meaning: X → XLower

Maximize
  Description: Maximize the input parameter within its range.
  Mathematical meaning: X → XUpper

Seek Target
  Description: Achieve an input parameter value that is close to the objective Target.
  Mathematical meaning: X → XTarget

Objectives for Output Parameters

(Y = output parameter, YTarget = Target)

No Objective
  Description: No objective is specified.
  Mathematical meaning: N/A
  GDO treatment: N/A

Minimize
  Description: Achieve the lowest possible value for the output parameter.
  Mathematical meaning: Minimize Y
  GDO treatment: Objective

Maximize
  Description: Achieve the highest possible value for the output parameter.
  Mathematical meaning: Maximize Y
  GDO treatment: Objective

Seek Target
  Description: Achieve an output parameter value that is close to the objective Target.
  Mathematical meaning: Y → YTarget
  GDO treatment: Objective

Constraints for Discrete Input Parameters or Continuous Input Parameters with Manufacturable Values

(X = an input parameter, XLower = Lower Bound, XUpper = Upper Bound, XTarget = Target)

No Constraint
  Description: Uses the full discrete range allowed for the parameter.
  Mathematical meaning: XLower <= X <= XUpper

Values = Bound
  Description: Set the constraint to in-range values close to the Lower Bound.
  Mathematical meaning: X → XLowerBound

Values >= Lower Bound
  Description: Set the constraint to in-range values above the Lower Bound.
  Mathematical meaning: X >= XLowerBound

Values <= Upper Bound
  Description: Set the constraint to in-range values below the Upper Bound.
  Mathematical meaning: X <= XUpperBound

Constraints for Output Parameters

(Y = output parameter, YTarget = Target)

No Constraint
  Description: No constraint is specified.
  Mathematical meaning: N/A
  GDO treatment: N/A

Values = Bound
  Description: If a Lower Bound is specified, then achieve an output parameter value that is close to that bound.
  Mathematical meaning: Y → YLowerBound
  GDO treatment: Constraint

Values >= Lower Bound
  Description: If a Lower Bound is specified, then achieve an output parameter value that is above the Lower Bound.
  Mathematical meaning: Y >= YLowerBound
  GDO treatment: Inequality constraint

Values <= Upper Bound
  Description: If an Upper Bound is specified, then achieve an output parameter value that is below the Upper Bound.
  Mathematical meaning: Y <= YUpperBound
  GDO treatment: Inequality constraint

Lower Bound <= Values <= Upper Bound
  Description: If the Lower Bound and the Upper Bound are specified, then achieve an output parameter that is within the defined range.
  Mathematical meaning: YLowerBound <= Y <= YUpperBound, given YLowerBound < YUpperBound
  GDO treatment: Constraint
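The tables above can be tied together in a short sketch: each Objective Type maps to a quantity that is smaller when the goal is better met, and each Constraint Type maps to a feasibility test. The two helper functions are illustrative simplifications, not the GDO internals.

```python
# Hedged sketch of objective and constraint evaluation per the tables above.
# Illustrative only; the actual GDO treatment is more involved.

def objective_metric(y, objective_type, target=None):
    """Return a value that is smaller when the objective is better met."""
    if objective_type == "Minimize":
        return y
    if objective_type == "Maximize":
        return -y
    if objective_type == "Seek Target":
        return abs(y - target)   # Y -> YTarget
    return 0.0                   # No Objective

def satisfies(y, constraint_type, lower=None, upper=None):
    """Feasibility test corresponding to each Constraint Type."""
    if constraint_type == "Values >= Lower Bound":
        return y >= lower
    if constraint_type == "Values <= Upper Bound":
        return y <= upper
    if constraint_type == "Lower Bound <= Values <= Upper Bound":
        return lower <= y <= upper
    return True                  # No Constraint

print(objective_metric(7.0, "Seek Target", target=5.0))  # 2.0
print(satisfies(7.0, "Lower Bound <= Values <= Upper Bound", 3.0, 10.0))  # True
```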

The following post-processing properties are available and editable in the DSP (Decision Support Process) section of the Properties view:


Objective Importance/Constraint Importance

For any parameter with an Objective or Constraint defined, allows you to select the relative importance of that parameter with regard to the other objectives. Available options on the drop-down are Default, Higher, and Lower.

Constraint Lower Bound/Constraint Upper Bound

For each parameter that has a Constraint Type of Lower Bound <= Values <= Upper Bound, allows you to enter a value for the Lower Bound and Upper Bound, defining an acceptable range for the output. The second value must be greater than the first. Available only for continuous output parameters.

Constraint Handling

For any constrained parameter, allows you to specify the Constraint Handling for that parameter.

This option can be used for any optimization application and is best thought of as a "constraint satisfaction" filter on samples generated from the optimization runs. This is especially useful for Screening samples to detect the edges of solution feasibility for highly constrained nonlinear optimization problems. The following choices are available:

• Relaxed: Samples are generated in the full parameter space, with the constraint only being used to identify the best candidates. When this option is selected, the upper, lower, and equality constrained objectives of the candidate points shown in the Table of Candidate Points are treated as objectives; thus any violation of the objective is still considered feasible.

• Strict (default): Samples are generated in the reduced parameter space defined by the constraint. When this option is selected, the upper, lower, and equality constraints are treated as hard constraints; that is, if any of them are violated, then the candidate is no longer displayed.
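The difference between the two options can be sketched as a filter on a sample set: under Strict, a sample violating the constraint is dropped; under Relaxed, it is kept but its violation worsens its ranking. The sample data, bound, and `violation` helper are hypothetical.

```python
# Hedged sketch of Constraint Handling on samples, with a single output
# constraint Y <= 10. Illustrative only; not the product implementation.

samples = [{"P1": 1.0, "Y": 4.0},
           {"P1": 2.0, "Y": 9.0},
           {"P1": 3.0, "Y": 12.0}]   # third sample violates Y <= 10
upper_bound = 10.0

def violation(s):
    """Amount by which the sample exceeds the Upper Bound (0 if feasible)."""
    return max(0.0, s["Y"] - upper_bound)

# Strict: violating samples are removed from the candidate list.
strict = [s for s in samples if violation(s) == 0.0]

# Relaxed: all samples kept; violation only pushes a sample down the ranking.
relaxed = sorted(samples, key=violation)

print(len(strict))                 # 2
print([s["P1"] for s in relaxed])  # [1.0, 2.0, 3.0]
```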


Objectives, Constraints, and Optimization Methods

Objective and Constraint Requirements According to Optimization Method:

• Screening uses any goals that are defined to color the samples in the Samples chart, but does not have any requirements concerning the number of objectives or constraints defined.

• MOGA and Adaptive Multiple-Objective require that an objective is defined for at least one output parameter. This means that an Objective Type is selected for at least one of the output parameters and an Objective Value is defined. Multiple output objectives are allowed.

• NLPQL, MISQP, and Adaptive Single-Objective require that an objective is defined for one output parameter. This means that an Objective Type is selected for one of the output parameters and an Objective Value is defined. Only a single output objective is allowed.

Non-Screening Optimization: When any of the methods other than Screening (i.e., MOGA, NLPQL, MISQP, Adaptive Single-Objective, or Adaptive Multiple-Objective) are used for Goal Driven Optimization, the search algorithm uses the Objective Type and target value and/or the Constraint Type and target range of the output parameters and ignores the similar properties of the input parameters. However, when the candidate points are reported, the first Pareto fronts (in the case of MOGA and Adaptive Multiple-Objective) or the best solution set (in the case of NLPQL, MISQP, and Adaptive Single-Objective) are filtered through a Decision Support Process that applies the parameter Objectives and Constraints and reports the returned candidate points. The candidate points may not necessarily correspond to the best input targets, but do provide the best designs with respect to the output properties.

Screening Optimization: The Screening option is not strictly an optimization approach; it uses the properties of the input as well as the output parameters, and uses the Decision Support Process on a sample set generated by the Shifted Hammersley technique to rank and report the candidate points. The candidate points correspond to both the input and output objectives specified by the user, although the candidate points will not correspond to the best optimal solutions.
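The "first Pareto front" mentioned above rests on the notion of non-domination. The sketch below extracts the non-dominated points from a small set of two-objective samples (both objectives minimized); the point values are made up for illustration.

```python
# Hedged sketch of Pareto non-domination, the idea behind the "first Pareto
# front" reported by MOGA and Adaptive Multiple-Objective. Illustrative only.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

points = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = [p for p in points if not any(dominates(q, p) for q in points)]
print(front)  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

Here (3.0, 4.0) is dominated by (2.0, 3.0) and so is excluded from the front; the remaining points are the trade-off set the Decision Support Process then ranks against the stated objectives and constraints.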

Working with Candidate Points

DesignXplorer provides you with multiple ways of viewing and working with candidate points, according to the node selected in the Optimization tab Outline view.

• When you select the Optimization node, the Table view displays a summary of candidate data.

• When you select the Candidate Points node, the Table view allows you to view existing candidate points and add new custom candidate points. Results are also displayed graphically in the Chart view; for details, see Using the Candidate Points Results (p. 173).

Once candidates are created, you can verify them and also have the option of inserting them into the Response Surface as other types of points.

Related Topics:

Viewing and Editing Candidate Points in the Table View
Retrieving Intermediate Candidate Points
Creating New Design, Response, Refinement, or Verification Points from Candidates
Verify Candidates by Design Point Update


Viewing and Editing Candidate Points in the Table View

When you select the Candidate Points node of the Outline view, the Table view displays candidate point data on the existing candidate points that were generated by the optimization.

The maximum number of candidate points that can be generated by the optimization is determined by the value you specify for the Maximum Number of Candidates property. The recommended maximum number of candidate points varies according to the optimization method being used. For example, you generally need only one candidate for the gradient-based, single-objective algorithms (NLPQL and MISQP), but you can request as many candidates as you want for multiple-objective algorithms (because there are several potential candidates for each Pareto front that is generated).

Note

The number of candidate points does not affect the optimization, so you can experiment by changing the value of the Maximum Number of Candidates property and then updating the optimization; so long as this is the only property changed, the update only performs post-processing operations and the candidates are generated immediately.

Each candidate point is displayed, along with its input and output values. Output parameter values calculated from simulations (design point updates) are displayed in black text, while output parameter values calculated from a response surface are displayed in the custom color defined in the Options dialog (see Response Surface Options (p. 23)). The number of gold stars or red crosses displayed next to each goal-driven parameter indicates how well the parameter meets the stated goal, from three red crosses (the worst) to three gold stars (the best).

For each parameter with a goal defined, the table also calculates the percentage of variation for all parameters with regard to an initial reference point. By default, the initial reference point for an NLPQL or MISQP optimization is the Starting Point defined in the optimization properties. For a Screening or MOGA optimization, the initial reference point is the most viable candidate, Candidate 1. You can set any candidate point as the initial reference point by selecting the radio button in the Reference column. The Parameter Value column displays the parameter value and stars indicating the quality of the candidate. In the Variation from Reference column, green text indicates variation in the expected direction and red text indicates variation that is not. When there is no obvious direction (as for a constraint), the percentage value is displayed in black text.
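A percentage-of-variation column like the one described above can be sketched as a simple relative difference against the reference point. The exact formula DesignXplorer uses is not documented here; a common form is shown with made-up parameter values.

```python
# Hedged sketch of a "Variation from Reference" computation: each candidate
# value is compared to the reference point as a percentage. Illustrative only.

def variation_from_reference(value, reference):
    """Relative difference from the reference value, in percent."""
    return 100.0 * (value - reference) / reference

reference = {"P1": 2.0, "Y": 50.0}   # assumed reference point
candidate = {"P1": 2.5, "Y": 45.0}   # assumed candidate point

for name in reference:
    print(name, variation_from_reference(candidate[name], reference[name]))
# P1 25.0
# Y -10.0
```

Whether a given percentage is shown in green or red then depends on the direction the goal favors (e.g., a negative variation is good for a Minimize objective).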

The Name of each candidate point indicates whether the candidate point corresponds to a design point in the Parameter Set Table of Design Points (i.e., whether the candidate point and design point have the same input parameter values). If the design point is deleted from the Parameter Set or the definition of either point is changed, the link between the two points is broken (without invalidating your model or results) and the indicator is removed from the candidate point’s name.

If parameter relationships are defined and enabled, you can opt to also view parameter relationships in the Candidate Points Table view. To do so, select the Show Parameter Relationships check box in the Properties view. The parameter relationship(s) and their expressions will be shown in the Candidate Points table. For NLPQL and MISQP, the table will also show current values for each expression.


Creating Custom Candidate Points

The Candidate Points Table view allows you to create custom candidate points. You can add candidate points to represent the existing design of a product, the initial design of the parametric study, or other points of interest. You can add a custom candidate point by any of the following methods:

• Select Candidate Points under the Outline view Results node. In the Table view, you can enter data into the cells of the bottom row of the table. For a Response Surface Optimization, you can also right-click a candidate point row in the Table view and select Insert as Custom Candidate Point.

• Select the Optimization node of the Outline view. For a Response Surface Optimization, you can also right-click a candidate point row in the Table view and select Insert as Custom Candidate Point.

• Select any chart under the Outline view Results node. Right-click a point in the chart and select Insert as Custom Candidate Point.

When a custom candidate point is created in a Response Surface Optimization, the outputs of custom candidates are automatically evaluated from the response surface. When a custom candidate is created in a Direct Optimization, the outputs of the custom candidates are not brought up to date until the next real solve.

Once created, the point is automatically plotted in the Candidate Points results that display in the Chart view and can be treated as any other candidate point. You have the ability to edit the name, edit input parameter values, and select options from the right-click context menu. In addition to the standard context menu options, there is an Update Custom Candidate Point option available for out-of-date candidates in a Direct Optimization and a Delete option that allows you to delete a custom candidate point.

Retrieving Intermediate Candidate Points

DesignXplorer allows you to monitor an optimization while it is still running by watching the progress of the History chart. If the History chart indicates that candidate points meeting your criteria have been found midway through the optimization, you can stop the optimization and retrieve those results without having to run the rest of the optimization. For details, see Using the History Chart (p. 165).

To stop the optimization, click the Show Progress button and click the Interrupt button to the right of the progress bar. Select either Interrupt or Abort from the dialog that is displayed (intermediate results will be available in either case).

When the optimization is stopped, candidate points are generated from the data available at that time, such as solved samples, results of the current iteration, the current populations, etc.


To see the state of the optimization at the time it was stopped, view the optimization status and counts in the Optimization node Properties view.

To view the intermediate candidate points, select Candidate Points under the Results node.

Note

DesignXplorer may not be able to return verified candidate points for optimizations that have been stopped.

When an optimization is stopped midway through, the Optimization component remains in an unsolved state. If you change any settings before updating the optimization again, the optimization process must start over. However, if you do not change any settings before the next update, DesignXplorer makes use of the design point cache to quickly return to the current iteration.

Creating New Design, Response, Refinement, or Verification Points from Candidates

Once the objectives are stated and candidate points are generated, the candidate points are listed in the Optimization table view. They can be stored as response points, design points, refinement points, or verification points.

You can create new points by right-clicking a candidate point in the Table view (when either the Optimization node or the Candidate Points node is selected in the Outline view) or right-clicking a point in any of the optimization charts and selecting an option from the context menu. The options available depend on the type of optimization. Possible options are:

• Explore Response Surface at Point inserts a new response point into the Table of Response Surface by copying the input and output parameter values of the selected candidate point(s).

• Insert as Design Point creates a new design point at the project level (edit the Parameter Set in the Schematic view to list the existing design points) by copying the input parameter values of the selected candidate point(s). The output parameter values are not copied because they are approximated values provided by the Response Surface.

• Insert as Refinement Point inserts a new refinement point into the Table of Response Surface by copying the input parameter values of the selected candidate point(s).

• Insert as Verification Point inserts a new verification point into the Table of Response Surface by copying the input and output parameter values of the selected candidate point(s).

• Insert as Custom Candidate Point creates new custom candidate points in the Table of Candidate Points by copying the input parameter values of the selected candidate point(s).

For a Response Surface Optimization, the above insertion operations are available for the Raw Optimization Data table and most optimization charts, depending on the context. For instance, it is possible to right-click a point in a Tradeoff chart in order to insert the corresponding sample as a response point, a refinement point, or a design point. The same operation is available from a Samples chart.

Note

For a Direct Optimization, only the Insert as Design Point and Insert as Custom Candidate Point context menu options are available.

Verify Candidates by Design Point Update

For a Response Surface Optimization, candidate points are verified automatically at the end of the optimization update if you select the Verify Candidate Points check box in the Properties view of the Optimization tab. Candidate points can also be interactively verified after they are created by right-clicking one or more candidate points in the Table view (when either the Optimization or the Candidate Points node is selected in the Outline view) and selecting the Verify by Design Points Update option from the context menu; this option is available for both optimization-generated and custom candidate points.

DesignXplorer verifies candidate points by creating and updating design points with a "real solve," using the input parameter values of the candidate points. The output parameter values for each candidate point are displayed in a separate row. For a Response Surface Optimization, verified candidates are placed next to the row containing the output values generated by the response surface; the sequence varies according to sort order. Output parameter values calculated from simulations (design point updates) are displayed in black text, while output parameter values calculated from a response surface are displayed in the custom color defined in the Options dialog (see Response Surface Options (p. 23)).

In a Response Surface Optimization, if there is a large difference between the results of the verified and unverified rows for a point, it could indicate that the response surface is not accurate enough in that area and perhaps refinement or other adjustments are necessary. In this case, it is possible to insert the candidate point as a refinement point. You will then need to recompute the Goal Driven Optimization so that the refinement point and the new response surface are taken into account.
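The accuracy check described above can be sketched as a relative-difference test between the response-surface prediction and the verified (real-solve) value. The 5% threshold below is an arbitrary example, not a DesignXplorer default.

```python
# Hedged sketch: flag candidates whose verified output differs from the
# response-surface prediction by more than a relative tolerance, suggesting
# the point is worth inserting as a refinement point. Illustrative only.

def needs_refinement(predicted, verified, tolerance=0.05):
    """True if the prediction is off by more than `tolerance` (relative)."""
    return abs(predicted - verified) > tolerance * abs(verified)

print(needs_refinement(98.0, 100.0))  # False (2% off: acceptable)
print(needs_refinement(85.0, 100.0))  # True  (15% off: refine here)
```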

Note

• This option could be convenient when the update of a design point is fast enough.

• Often the candidate points will not have the ideal input parameters, such as a thickness that is 0.127 instead of the more practical 0.125. Many users prefer to use the right-click option to turn this candidate into a design point, edit the design point input parameters, and then run that design point instead of the candidate point.


To solve the verification points, DesignXplorer uses the same mechanism used to solve DOE points. The verification points are either deleted or persisted after the run as determined by the usual DesignXplorer Preserve Design Points after a DX Run option. As usual, if the update of a verification point fails, it is preserved automatically in the project; you can explore it (as a design point) by editing the Parameter Set bar on the Project Schematic.

Goal Driven Optimization Charts and Results

After solving an Optimization cell, the following results or charts are created by default under the Results node of the Outline view: the Candidate Points results, the Sensitivities chart, the Tradeoff chart, and the Samples chart.

After the first solve of the Optimization cell:

• The Convergence Criteria chart displays by default in the Chart view.

• The History chart is available for the following objects in the Outline view:

– Objectives and constraints

– Enabled input parameters

– Enabled parameter relationships

With the exception of the Candidate Points results, the Convergence Criteria chart, and the History chart, it is possible to duplicate charts. Select the chart from the Outline view and either select Duplicate from the context menu or use the drag-and-drop mechanism. This operation attempts an update of the chart, so duplicating an up-to-date chart results in an up-to-date chart.

Related Topics:

Using the Convergence Criteria Chart
Using the History Chart
Using the Candidate Points Results
Using the Sensitivities Chart (GDO)
Using the Tradeoff Chart
Using the Samples Chart

Using the Convergence Criteria Chart

Once the optimization has been updated for the first time, the Convergence Criteria chart allows you to view the evolution of the convergence criteria for the selected iterative optimization method.

Note

• The Convergence Criteria chart is not available for the Screening optimization method.

• When an external optimizer is used, the Convergence Criteria chart will be generated if data is available.

The Convergence Criteria chart displays by default in the Chart view when the Optimization node or Convergence Criteria node is selected in the Outline view. The rendering and logic of the chart varies according to whether you are using a multiple-objective or single-objective optimization method.


The chart is updated after each iteration, so you can use it to monitor the progress of the optimization. When the convergence criteria have been met, the optimization stops and the chart remains available.

Related Topics:

Using the Convergence Criteria Chart for Multiple-Objective Optimization
Using the Convergence Criteria Chart for Single-Objective Optimization

Using the Convergence Criteria Chart for Multiple-Objective Optimization

When a multiple-objective optimization method such as MOGA or AMO is being used, the Convergence Criteria chart plots the evolution of the Maximum Allowable Pareto Percentage and the Convergence Stability Percentage convergence criteria. For detailed information on these criteria, see Convergence Criteria in MOGA-Based Multi-Objective Optimization (p. 245).

Note

If an external optimizer that supports multiple objectives is being used, the Convergence Criteria chart will display the data that is available.

Before running your optimization, you can specify values for these convergence criteria. To do so, select the Optimization node in the Outline view and then edit the values in the Optimization section of the Properties view.

The Convergence Criteria chart for a multiple-objective optimization is notated as follows:

• The number of iterations is displayed along the X-axis.

• The convergence criteria percentage is displayed along the Y-axis.

• The Maximum Allowable Pareto Percentage is represented by a solid red line.

• The evolution of the Pareto Percentage is represented by a dashed red curve.

• The Convergence Stability Percentage is represented by a solid green line.

• The Stability Percentage is represented by a dashed green curve.

• For MOGA, the convergence criteria values are displayed in the legend.


You can control the display of the chart by enabling or disabling the convergence criteria. To do so, select the Convergence Criteria node in the Outline view. In the Criteria section of the Properties view, select or deselect the Enabled check box for each criterion to specify whether it should be displayed on the chart.

Using the Convergence Criteria Chart for Single-Objective Optimization

When a single-objective optimization method such as NLPQL, MISQP, or ASO is being used, the Convergence Criteria chart plots the evolution of the best candidate point for the optimization by identifying the best point for each iteration.

Note

The convergence criteria for these methods, the Allowable Convergence Percentage for NLPQL and MISQP and the Convergence Tolerance for ASO, have variations that are too small (10⁻⁶) to be easily displayed on the chart. This will be addressed in a future release.

Before running your optimization, you can specify values for the convergence criteria relevant to your selected optimization method. (Although these criteria will not be explicitly shown on the chart, they will affect the optimization and the selection of the best candidate.) To do so, select the Optimization node in the Outline view and then edit the values in the Optimization section of the Properties view.

The Convergence Criteria chart for a single-objective optimization is notated as follows:

• The Number of Iterations is displayed along the X-axis.

• The Output Value is displayed along the Y-axis.

• The evolution of the Best Candidate is represented by a solid red curve.

• Best candidates are represented by red points along the curve.


• Best candidates that are infeasible points are represented by gray squares along the curve.

Using the History Chart

Once the optimization has been updated for the first time, the History chart allows you to view the optimization history of an enabled objective/constraint, input parameter, or parameter relationship.

Additionally, it gives you the option of monitoring the progress of the selected object while the optimization is still in progress. If you select an object during an update, the chart refreshes automatically and shows the evolution of the objective/constraint, input parameter, or parameter relationship throughout the update. (For the iterative optimization methods, the chart is refreshed after each iteration. For the Screening method, it is only updated when the optimization update is complete.) You can select a different object at any time during the update in order to plot and view a different chart.

The History charts remain available when the update is completed. In the Outline view, a sparkline version of the History chart is displayed for each objective/constraint, input parameter, or parameter relationship.

If the History chart indicates that the optimization has converged midway through the process, you can stop the optimization and retrieve the results without having to run the rest of the optimization. For details, see Retrieving Intermediate Candidate Points (p. 159).

Note

Failed design points are not displayed on the History chart.

You can access the History chart by selecting an objective or a constraint under the Objectives and Constraints node or an input parameter or parameter relationship under the Domain node of the Outline view in the Optimization tab.

Related Topics:

Working with the History Chart in the Chart View
Viewing History Chart Sparklines in the Outline View
Using the Objective/Constraint History Chart
Using the Input Parameter History Chart
Using the Parameter Relationships History Chart

Working with the History Chart in the Chart View

The rendering of the History chart varies, not only according to whether an objective/constraint, input parameter, or parameter relationship is being charted, but also according to the optimization method being used. The History chart is notated as follows:

• Objective values, which fall within the Optimization Domain defined for the associated parameter, are listed vertically along the left side of the chart.

• Each of the points in the sample set (as defined by the Size of Generated Sample Set property) is listed horizontally along the bottom of the chart.

• The evolution of the object is represented by a red curve.

• Bounds for constraints are represented by gray dashed lines.

• Target values are represented by blue dashed lines.

You can hover your mouse over any data point in the chart to view the X and Y coordinates.

Screening For a Screening optimization, which is non-iterative, the History chart displays all the points of the sample set. The chart is updated when all of the points have been evaluated. The plot reflects the non-iterative process, with each point visible on the chart.

MOGA For a MOGA optimization, the History chart displays the evolution of the population of points throughout the iterations in the optimization. The chart is updated with the most recent population (as defined by the Number of Samples per Iteration property) at the end of each iteration.

NLPQL and MISQP For an NLPQL or MISQP optimization, the History chart enables you to trace the progress of the optimization from a defined starting point. The chart displays the objective value associated with the point used for each iteration. Note that the points used to evaluate the derivative values are not displayed. The plot reflects the gradient optimization process, with the point for each iteration visible on the chart.

ASO For an Adaptive Single-Objective optimization, the History chart enables you to trace the progress of the optimization through a specified maximum number of evaluations. On the Input Parameter History chart, the upper and lower bounds of the input parameter are represented by blue curves, allowing you to see the domain reductions narrowing toward convergence.

The chart displays the objective value corresponding to LHS or verification points. All evaluated points are displayed on the plot.

AMO For an Adaptive Multiple-Objective optimization, the History chart displays the evolution of the population of points throughout the iterations in the optimization. Each set of points (the number of which is defined by the Number of Samples Per Iteration property) displayed on the chart corresponds to the population used to generate the next population. Points corresponding to a real solve are plotted as black points, and points from the response surface are plotted with a square colored as specified in the Options dialog (see Response Surface Options (p. 23)). The plot reflects the iterative optimization process, with each iteration visible on the chart.

All candidate points generated by the optimization are real design points.

Viewing History Chart Sparklines in the Outline View

When a History chart for an objective/constraint, input parameter, or parameter relationship is generated, the same data is used to generate a sparkline image of the History chart. The sparklines for defined objects are displayed in the Outline view and are refreshed dynamically during the update, allowing you to follow the update progress for all of the objects simultaneously.

The History chart sparkline notation is similar to that of the History chart in the Chart view, as follows:

• If no constraints are present, sparklines are gray.

• If constraints are present:

– Sparklines are green when the constraint or parameter relationship is met.

– Sparklines are red when the constraint or parameter relationship is violated. Note that when parameter relationships are enabled and taken into account, the optimization should not pick infeasible points.

• Targets are represented by blue dashed lines.

• Bounds for constraints are represented by gray dashed lines.

In the image below:

• In the Outline view, the sparkline for Minimize P9; P9 <= 14000 N is entirely green, indicating that the constraints are met throughout the optimization history.

• In the Outline view, the sparkline for Maximize P7; P7 >= 13000 is both red and green, indicating that the constraints are violated at some points and met at others.

• The History chart for the constraint Maximize P7; P7 >= 13000 is shown in the Charts view. The points beneath the dotted gray Lower Bound line are infeasible points.

Using the Objective/Constraint History Chart

To generate the History chart for an enabled objective or constraint, update the optimization and then select the objective or constraint under the Objectives and Constraints node in the Optimization tab Outline view. If you change your selection during the update, the chart refreshes automatically, allowing you to monitor the update process.

The History chart for an objective or constraint displays a red curve to represent the evolution of the parameter for which an objective or constraint has been defined. Constraints are represented by gray dashed lines, and target values by a blue dashed line.

In the example below, the History chart has been plotted for output parameter P9 – WB_BUCK in a MOGA optimization, and the parameter is constrained such that it must have a value less than or equal to 1100. The constraint is represented by the dotted gray line.

Given a Constraint Type of Maximize and an Upper Bound of 1100, the area under the dotted gray line represents the infeasible domain.

Using the Input Parameter History Chart

To generate the History chart for an enabled input parameter, update the optimization and then select the input parameter under the Domain node in the Optimization tab Outline view. If you change your selection during the update, the chart refreshes automatically, allowing you to monitor the update process.

For an input parameter, the History chart displays a red curve to represent the evolution of the parameter for which the objective has been defined. If an objective or constraint is defined for the parameter, the same chart will display when the objective/constraint or the input parameter is selected.

In the example below, the History chart has been plotted for input parameter P3 – WB_L in an NLPQL optimization. For P3 – WB_L, a Starting Value of 100, a Lower Bound of 90, and an Upper Bound of 110 have been defined. The optimization converged upward to the upper bound for the parameter.

Using the Parameter Relationships History Chart

To generate the History chart for an enabled input parameter relationship, update the optimization and then select the parameter relationship under the Domain node in the Optimization tab Outline view. If you change your selection during the update, the chart refreshes automatically, allowing you to monitor the update process.

For a parameter relationship, the History chart shows two curves to represent the evolution of the left expression and the right expression of the relationship. The number of points is displayed along the X-axis and the expression values along the Y-axis.

In the example below, the History chart has been plotted for parameter relationship P2 > P1 in a Screening optimization.

Using the Candidate Points Results

The Candidate Points results, which are displayed in both the Table and the Chart view, allow you to view different kinds of information about candidate points. They allow you to specify one or more parameters for which you want to display candidate data. In the Chart view, the legend's color-coding allows you to view and interpret the samples, candidate points identified by the optimization, candidates inserted manually, and candidates for which output values have been verified by a design point update. You can specify the chart's properties to control the visibility of each axis, feasible samples, candidates you've inserted manually, and candidates with verified output values. For details on the results displayed in the Table view, see Viewing and Editing Candidate Points in the Table View (p. 158).

To generate the Candidate Points results, update the optimization and then select Candidate Points under the Results node in the Optimization tab Outline view.

Note

If the project was created in ANSYS Workbench version 14.0 or earlier and you are opening it for the first time, you must migrate the project database manually before generating the Candidate Points results. To do so, click the migration icon next to Candidate Points and then update the optimization. The results are generated automatically and can be viewed by selecting Candidate Points.

Related Topics:

Understanding the Candidate Points Results Display
Candidate Points Results: Properties

Understanding the Candidate Points Results Display

The rendering of the Candidate Points results that are displayed in the Chart view varies according to your selections in the Properties view. Once the results are generated, you can adjust the properties to change what is displayed. In the results for an NLPQL optimization example below, with Coloring Method set to by Candidate Type, you can see that both samples and candidate points are displayed as follows:

• Feasible samples are represented by pale orange curves.

• The starting point is represented by an orange curve.

• Each of the three candidate points generated by the optimization is represented by a green curve.

• Each of the custom candidate points is represented by a purple curve.

If you set Coloring Method to by Source Type, the samples will be colored according to the source from which they were calculated, following the color convention used for data in the Table view. Samples calculated from a simulation are represented by black curves, while ones calculated from a response surface are represented by a curve in a custom color specified in the Options dialog (see Response Surface Options (p. 23)).

When you move your mouse over the results, you can pick out individual objects, which become highlighted in orange. When you select a point, the parameter values for the point are displayed in the Value column of the Properties view.

For each parameter listed across the bottom of the results display, there is a vertical line. When you mouse over it, two “handles” appear at the top and bottom of the line. Drag the handles up or down to narrow the focus down to the parameter ranges that interest you.

When you select a point on the results, the right-click context menu provides options for exporting data and saving candidate points as design, refinement, verification, or custom candidate points.

Candidate Points Results: Properties

To specify the properties of the Candidate Points results that display in the Table and Chart views, expand the Results node in the Optimization Outline view. Select Candidate Points and then edit the properties in the Properties view. Available properties vary according to the type of optimization selected.

Table

Determines the properties of the results displayed in the Table view. For the Show Parameter Relationships property, select the Value check box to display parameter relationships in the results Table view.

Chart

Determines the properties of the results displayed in the Chart view. Select the Value check box to enable the property.

• Display Full Parameter Name: Select to display the full parameter name in the results.

• Show Candidates: Select to show candidates in the results.

• Show Samples: Select to show samples in the results.

• Show Starting Point: Select to show the starting point in the results (NLPQL and MISQP only).

• Show Verified Candidates: Select to show verified candidates in the results (Response Surface Optimization only; candidate verification is not necessary for Direct Optimization because the points result from a real solve, rather than an estimation).

• Coloring Method: Select whether the results should be colored by candidate type or source type, as follows:

– by Candidate Type: Different colors are used for different types of candidate points. Default value.

– by Source Type: Output parameter values calculated from simulations are displayed in black. Output parameter values calculated from a response surface are displayed in a custom color selected in the Options dialog (see Response Surface Options (p. 23)).

Input Parameters

Each of the input parameters is listed in this section. Under Enabled, you can enable or disable each input parameter by selecting or deselecting the check box. Only enabled input parameters are shown in the results.

Output Parameters

Each of the output parameters is listed in this section. Under Enabled, you can enable or disable each output parameter by selecting or deselecting the check box. Only enabled output parameters are shown in the results.

Generic Chart Properties

You can modify various generic chart properties for the results. See Setting Chart Properties for details.

Using the Sensitivities Chart (GDO)

The Sensitivities chart shows the global sensitivities of the output parameters with respect to the input parameters. The chart can be displayed as a bar chart or a pie chart by changing the Mode in the chart's Properties view. You can also select which parameters to display on the chart by selecting or deselecting the check box next to each parameter in the Properties view.

Various generic chart properties can be changed for this chart.

Note

The GDO Sensitivities chart is available only for the MOGA optimization method.

If the p-Value calculated for a particular input parameter is above the Significance Level specified in the Design Exploration section of the Tools > Options dialog, the bar for that parameter will be shown as a flat line on the chart. See Determining Significance (p. 55) for more information.

Using the Tradeoff Chart

When an Optimization component is solved, the Tradeoff chart is created by default under the Results node of the Outline view. The Tradeoff chart is a 2D or 3D scatter chart representing the generated samples. The colors applied to the points represent the Pareto front they belong to, from red (the worst points) to blue (the best points).

The chart is controlled by changing its properties in the Properties view.

• The chart can be displayed in 2D or 3D by changing the Mode in the chart's Properties view.

• You can also select which parameter to display on each axis of the chart by selecting the parameter from the drop-down menu next to the axis name in the Properties view.

• Limit the Pareto fronts shown by moving the slider or entering a value in the field above the slider in the chart's Properties view.

• Various generic chart properties can be changed for this chart.

Using Tradeoff Studies

In the Tradeoff chart, the samples are ranked by non-dominated Pareto fronts, and you can view the tradeoffs that most interest you. To make sense of the true nature of the tradeoffs, the plots must be viewed with the output parameters as the axes. This approach shows which goals can be achieved and whether this entails sacrificing the goal attainment of other outputs. Typically, a Tradeoff chart presents a choice of possible, non-dominated solutions.
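The ranking rests on Pareto dominance: one sample dominates another if it is at least as good in every objective and strictly better in at least one. A minimal sketch of extracting the first Pareto front, assuming all objectives are minimized (an illustration of the concept, not DesignXplorer's algorithm):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b.
    Assumes every objective is to be minimized (a simplifying
    assumption; signs would flip for maximization objectives)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

samples = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
# (3.0, 4.0) is dominated by (2.0, 3.0); the other three are non-dominated.
```

Subsequent fronts can be obtained by removing the first front from the sample set and repeating the extraction, which is what moving the Pareto-front slider effectively reveals.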

When an optimization is updated, you can view the best candidates (up to the requested number) from the sample set based on the stated objectives. However, these results are not truly representative of the solution set, as this approach obtains results by ranking the solution by an aggregated weighted method. Schematically, this represents only a section of the available Pareto fronts. Changing the weights (using the Objective Importance or Constraint Importance property in the Properties view) will display different such sections. This postprocessing step helps in selecting solutions if you are sure of your preferences for each parameter.

Figure 1: Three-Dimensional Tradeoff Chart Showing First Pareto Front (p. 177) shows the results of the tradeoff study performed on a MOGA sample set. The first Pareto front (non-dominated solutions) is shown as blue points on the output-axis plot. The slider in the Properties view can be moved to the right to add more fronts, effectively adding more points to the Tradeoff chart. The additional points added this way are inferior to the points in the first Pareto front in terms of the objectives or constraints you specified, but in some cases, where there are not enough first Pareto front points, these may be necessary to obtain the final design. You can right-click on individual points and save them as design points or response points.

Figure 1: Three-Dimensional Tradeoff Chart Showing First Pareto Front

It is also important to discuss the representation of feasible and infeasible points (infeasible points are available if any of the objectives are defined as constraints). The MOGA algorithm always ensures that feasible points are shown as being of better quality than the infeasible points, and different markers are used to indicate them in the chart. In the 2-D and 3-D Tradeoff chart, colored rectangles denote feasible points and gray circles denote infeasible ones. You can enable or disable the display of infeasible points in the chart Properties view.

Also, in both types of charts, the best Pareto front is blue and the fronts gradually transition to red (worst Pareto front). Figure 2: Two-Dimensional Tradeoff Chart Showing Feasible and Infeasible Points (p. 178) is a typical 2-D Tradeoff chart with feasible and infeasible points.

Figure 2: Two-Dimensional Tradeoff Chart Showing Feasible and Infeasible Points

For more information on Pareto fronts and Pareto-dominant solutions, see Principles (GDO) (p. 225).

Using the Samples Chart

The Samples chart is a postprocessing feature that allows you to explore a sample set given defined goals. After solving an Optimization cell, a Samples chart will appear in the Outline under Results.

The aim of this chart is to provide a multidimensional graphical representation of the parameter space you are studying. The chart uses the parallel Y-axes to represent all of the inputs and outputs. Each sample is displayed as a group of line curves where each point is the value of one input or output parameter. The color of the curve identifies the Pareto front that the sample belongs to, or the chart can be set so that the curves display the best candidates and all other samples.

Compared to the Tradeoff chart, the Samples chart has the advantage of showing all the parameters at once, whereas the Tradeoff chart can only show three parameters at a time. The Samples chart is better for exploring the parameter space.

The Samples chart is a powerful exploration tool because of its interactivity. With the Samples chart axis sliders, you can filter for each parameter very easily, providing an intuitive way to explore the alternative designs. Samples are dynamically hidden if they fall outside of the bounds. Repeating the same operation with each axis allows you to manually explore and find trade-offs.
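The slider filtering described above amounts to a simple bounds test per parameter. A sketch under assumed data structures (a sample as a dict of parameter name to value; not DesignXplorer code):

```python
def visible_samples(samples, bounds):
    """Sketch of axis-slider filtering: a sample stays visible only
    if every filtered parameter falls within its [low, high] slider
    bounds. 'samples' is a list of dicts mapping parameter name to
    value; 'bounds' maps parameter name to a (low, high) pair.
    Both layouts are illustrative assumptions."""
    return [s for s in samples
            if all(lo <= s[p] <= hi for p, (lo, hi) in bounds.items())]

samples = [{"P1": 1.0, "P2": 5.0}, {"P1": 3.0, "P2": 2.0}]
kept = visible_samples(samples, {"P1": (0.0, 2.0)})  # hides the second sample
```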

The Samples chart has two modes: to display the candidates with the full sample set, or to display the samples by Pareto front (same as the Tradeoff chart). If you select the Pareto Fronts mode, the Coloring Method property allows you to specify whether the samples should be colored according to their Pareto front.

In the image below, the Samples chart is in Pareto Fronts mode and is colored by Pareto Front. The gradient ranges from blue (best) to red (worst).

In the Candidates mode, the Samples chart Coloring Method is always by Candidates, as shown in the image below.

Samples Chart: Properties

To specify the properties of a Samples chart, expand the Results node in the Optimization Outline view. Select Samples and then edit the chart properties in the Properties view. Note that available properties vary according to the type of optimization selected.

Chart Properties

Determine the properties of the chart. Select the Value check box to enable the property.

• Display Full Parameter Name: Select to display the full parameter name on the chart.

• Mode: Specify whether the chart will be displayed by candidates or Pareto fronts. If Pareto Fronts is selected, the following option is available:

– Coloring Method: Specify whether the chart is colored by Samples or by Pareto Fronts.

• Number of Pareto Fronts to Show: Specify the number of Pareto fronts to be displayed on the chart.

Input Parameters

Each of the input parameters is listed in this section. Under Enabled, you can enable or disable each input parameter by selecting or deselecting the check box. Only enabled input parameters are shown on the chart.

Output Parameters

Each of the output parameters is listed in this section. Under Enabled, you can enable or disable each output parameter by selecting or deselecting the check box. Only enabled output parameters are shown on the chart.

Generic Chart Properties

You can modify various generic chart properties for this chart. See Setting Chart Properties for details.

Using Six Sigma Analysis

This section contains information about running a Six Sigma Analysis. For more information, see Six Sigma Analysis Component Reference (p. 46) and Understanding Six Sigma Analysis (p. 257).

Performing a Six Sigma Analysis
Using Statistical Postprocessing
Statistical Measures

Performing a Six Sigma Analysis

To perform a Six Sigma analysis, you must first add a Six Sigma Analysis template to your schematic from the Design Exploration section of the toolbox. You then edit the Design of Experiments (SSA) cell of the Six Sigma Analysis and specify which input parameters to evaluate, assign distribution functions to them as described in Setting Up Uncertainty Variables (p. 191), and specify the distribution attributes (this will generate your samples for each input variable). Once this is completed, the procedure for a Six Sigma Analysis is:

Solve the Six Sigma Analysis by:

• updating each individual cell in the analysis, either from the right-click menu in the Project Schematic or the Update button in the tab toolbar while editing the cell.

• right-clicking on the Six Sigma Analysis cell in the Project Schematic and selecting Update to update the entire analysis at once.

• clicking the Update Project button in the Project Schematic toolbar to update the entire project.

Using Statistical Postprocessing

From the Six Sigma Analysis tab you can see the statistics in the Properties view, the probability tables in the Table view (Tables (SSA) (p. 181)), and statistics charts in the Chart view (Using Parameter Charts (SSA) (p. 182)) associated with each parameter by selecting a parameter in the Outline. You can open the Sensitivities chart by selecting it from the Charts section of the Outline (see Using the Sensitivities Chart (SSA) (p. 182)).

Related Topics:

Tables (SSA)
Using Parameter Charts (SSA)
Using the Sensitivities Chart (SSA)

Tables (SSA)

On the Six Sigma Analysis tab, you are able to view the probability tables of any input or output variable. To select which table view is displayed, select the desired value (Quantile-Percentile or Percentile-Quantile) for the Probability Table field in the parameter's Properties view.

You can modify both types of tables by adding or deleting values. To add a value to the Quantile-Percentile table, type the desired value into the New Parameter Value cell at the end of the table. A row with the value you entered will be added to the table in the appropriate location. To add a new value to the Percentile-Quantile table, type the desired value into the appropriate cell (New Probability Value or New Sigma Level) at the end of the table. To delete a row from either table, right-click on the row and select Remove Level from the menu. The row will be deleted from the table. You can also overwrite any value in an editable column and corresponding values will be displayed in the other columns in that row.

Using Parameter Charts (SSA)

You can review the statistical results of the analysis by selecting an input or output parameter to view its Chart. The results of a Six Sigma Analysis are visualized using histogram plots and cumulative distribution function plots.

Various generic chart properties can be changed for this chart.

Using the Sensitivities Chart (SSA)

If you select Sensitivities from the Outline view of the Six Sigma tab, you can review the sensitivities derived from the samples generated for the Six Sigma Analysis. The Six Sigma Analysis sensitivities are Global Sensitivities, not Local Sensitivities. In the Properties view for the Sensitivities chart, you can choose the output parameters for which you want to review sensitivities, and the input parameters that you would like to evaluate for the output parameters.
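Sample-based global sensitivities of this kind are commonly built from rank-order (Spearman) correlations between the sampled input and output values. The sketch below illustrates the statistic itself; it is not the SSA implementation:

```python
def rank(values):
    """Average 1-based ranks, with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Group consecutive equal values so ties share one rank.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank-order correlation between two equal-length samples:
    the Pearson correlation of their ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because it works on ranks, the statistic captures any monotonic input-output relationship, which is what makes it suitable as a global (rather than local, derivative-based) sensitivity measure.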

Various generic chart properties can be changed for this chart.

Note

If the p-Value calculated for a particular input parameter is above the Significance Level specified in the Design Exploration section of the Tools > Options dialog, the bar for that parameter will be shown as a flat line on the chart. See Determining Significance (p. 55) for more information.

Statistical Measures

When the Six Sigma Analysis is updated, the following statistical measures are displayed in the Properties view for each parameter:

• Mean

• Standard Deviation

• Skewness

• Kurtosis

• Shannon Entropy (Complexity)

• Signal-Noise Ratio (Smaller is Better)

• Signal-Noise Ratio (Nominal is Best)

• Signal-Noise Ratio (Larger is Better)

• Sigma Minimum

• Sigma Maximum
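As an illustration, the measures listed above can be computed from a set of output samples. The formulas below are common textbook definitions (notably the Taguchi signal-to-noise ratios, a histogram-based Shannon entropy, and the distance of the observed extremes from the mean in standard deviations); they are a sketch for orientation, not necessarily the exact formulas DesignXplorer uses internally.

```python
import math

def statistical_measures(samples):
    # Common textbook formulas; not necessarily DesignXplorer's exact definitions.
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in samples) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in samples) / (n * std ** 4) - 3.0
    # Shannon entropy from a coarse histogram (a rough complexity measure):
    k = max(1, int(math.sqrt(n)))
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / k or 1.0
    counts = [0] * k
    for x in samples:
        counts[min(k - 1, int((x - lo) / width))] += 1
    entropy = -sum(c / n * math.log(c / n) for c in counts if c)
    return {
        "Mean": mean,
        "Standard Deviation": std,
        "Skewness": skew,
        "Kurtosis": kurt,
        "Shannon Entropy": entropy,
        "SN (Smaller is Better)": -10.0 * math.log10(sum(x * x for x in samples) / n),
        "SN (Nominal is Best)": 10.0 * math.log10(mean * mean / var),
        "SN (Larger is Better)": -10.0 * math.log10(sum(1.0 / (x * x) for x in samples) / n),
        "Sigma Minimum": (mean - min(samples)) / std,
        "Sigma Maximum": (max(samples) - mean) / std,
    }
```

For a symmetric sample set, Skewness is zero and Sigma Minimum equals Sigma Maximum, which is a quick sanity check on the definitions.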


Working with DesignXplorer

This section provides information on the following topics:

• Working with Parameters

• Working with Design Points

• Working with Sensitivities

• Working with Tables

• Working with Remote Solve Manager and DesignXplorer

• Working with Design Exploration Project Reports

Working with Parameters

General Information

Parameters are exposed from the individual analysis applications. Edit the cells (DOE, Response Surface, Parameters Correlation, etc.) of a design exploration template to see the available input and output parameters in the Outline view. You can set options for those parameters (such as upper and lower limits) by clicking on the parameter in the Outline and then setting the options for that parameter in the Properties view. The Properties view will also give you additional information about the parameters.

Note

If you modify your analysis database after a design exploration solution (causing the parameters to change), your design exploration studies may no longer reflect your analysis. If you modify your analysis, the Update Required symbol will appear on the DesignXplorer cells to indicate that they are out of date.

Parameters as Design Variables for Optimization

When you create a design exploration study, DesignXplorer captures the current value of each input parameter. The initial value becomes the default value of the parameter in DesignXplorer and is used to determine the default parameter range (+/- 10% of the parameter's initial value). These default values may not be valid in certain situations, so be sure to verify that the ranges are valid with respect to the input geometry and Analysis System scenario.
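The default-range rule above can be sketched in a few lines (an illustration of the +/- 10% convention, not DesignXplorer code):

```python
def default_range(initial_value, fraction=0.10):
    # Default bounds at +/- 10% of the initial value. A zero initial value
    # yields a degenerate range and must be widened manually.
    delta = abs(initial_value) * fraction
    return initial_value - delta, initial_value + delta
```

For an initial value of 50, this gives bounds of 45 and 55; for a negative initial value such as -20, it gives -22 and -18.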

Effects of a Parameter Change on Design Exploration Systems

When the parameters or the project are changed, the design exploration systems are set to the Refresh Required state. On the Refresh operation, the design exploration results can be partially or completely invalidated depending on the nature of the change. This includes design point results and the cache of design point results, which means the Update operation will recalculate all of them.

• When a direct input or direct output parameter is added or deleted in the project, or when the unit of a parameter is changed, all results, including design point results and the cache of design point results, are invalidated on Refresh. All design points will be recalculated on Update.


• When non-parametric changes occur in the model (e.g., a change to the direction of a non-parameterized force, the shape of the geometry, the value of a non-parameterized dimension of the model, etc.), all results, including design point results and the cache of design point results, are invalidated on Refresh. All design points will be recalculated on Update.

• When a derived output parameter is added or deleted in the project, or when the expression of a derived output parameter is modified, all results but the cache of design point results are invalidated on Refresh. So, on Update, all design point results will be retrieved without any new calculation. The rest of the design exploration results are recalculated.

• Because DesignXplorer is primarily concerned with the range of variation in a parameter, changes to a parameter value in the model or the Parameter Set bar are not propagated to existing design exploration systems by a Refresh or Update operation. The parameter values that were used to initialize a new design exploration system remain fixed within DesignXplorer unless you change them manually.

Note

In some situations, non-parametric changes may not be detected, and it is necessary to perform a Clear Generated Data operation followed by an Update operation in order to synchronize the design exploration systems with the new state of the project. For example, editing the input file of a Mechanical APDL system outside of Workbench is a non-parametric change that is not detected. In some rare cases, Inserting, Deleting, Duplicating, or Replacing systems in a project may not be reliably detected either.

You can change the way that units are displayed in your design exploration systems from the Units menu. Changing the units display in this manner simply causes the existing data in each system to be displayed in the new units system, and does not require an update of the design exploration systems.

Related Topics:

• Input Parameters

• Setting Up Design Variables

• Setting Up Uncertainty Variables

• Output Parameters

Input Parameters

By defining and adjusting your input parameters, you define the analysis of the model under investigation. This section discusses how to work with input parameters and addresses the following topics:

Defining Discrete Input Parameters (p. 187)

Defining Continuous Input Parameters (p. 188)

Defining Manufacturable Values (p. 188)

Changing Input Parameters (p. 190)

Changing Manufacturable Values (p. 191)


Defining Discrete Input Parameters

A discrete parameter physically represents a configuration or state of the model (for example, the number of holes or the number of weld points in a geometry). When you define a discrete parameter, you can define two or more Levels that contain the parametric values to be used in the analysis.

To define a parameter as discrete, select the parameter in the Outline view for the DOE. Then, in the Properties view, select Discrete as the Classification for the parameter. By default, the Number of Levels cell in the Properties view is set to 2, the minimum allowable number of levels. These levels are displayed in the Table view.

The Number of Levels cannot be changed in the Properties view. To change the Number of Levels for the parameter, you must add or delete a Level in the Table view. You can also edit the values for existing Levels in the Table view.

• To add a Level, select the empty cell in the bottom row of the Discrete Value column, type in an integer, and press Enter. The Number of Levels in the Properties view is updated automatically.

• To delete a Level, right-click on any part of the row containing the Level to be removed and select Delete from the context menu.

• To change the value of a Level, select the cell containing the value to be changed, type in an integer, and press Enter.

Note

The Discrete Value column of the Table view is not sorted as you enter or delete Levels or edit Level values. To sort it manually, click the down-arrow on the right of the header cell. Once you sort the column, the values will auto-sort as you add Levels or edit Level values.


Defining Continuous Input Parameters

A continuous parameter is one that physically varies in a continuous manner within a specific range of analysis. The range is defined by upper and lower bounds, which default to +/- 10% of the initial value of the parameter. You can edit the default bound values to adjust the range.

To define a parameter as continuous, select the parameter in the Outline view for the DOE. Then, in the Properties view, select Continuous as the Classification for the parameter. By default, the Value cell of the Properties table is populated with the parameter value defined in the Parameters table. This value cannot be edited in the DOE unless the parameter is disabled (deselected in the Outline view).

The values in the Lower Bound and the Upper Bound cells in the Properties view define the range of the analysis. To change the range, select the cell containing the value to be changed, type in a number, and press Enter. You are not limited to entering integers.

When you define the range for a continuous input parameter, keep in mind that the relative variation must be equal to or greater than 1e-10 in its current unit. If the relative variation is less than 1e-10, you can either adjust the variation range or disable the parameter.

• To adjust the variation range, select the parameter in the Outline view and then edit the values in the Lower Bound and Upper Bound cells of the Properties view.

• To disable the parameter, deselect the Enable check box in the Outline view.
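The range check described above can be sketched as follows. The guide does not spell out the exact formula for "relative variation," so this sketch assumes it means the range width relative to the largest bound magnitude; treat the details as illustrative.

```python
def range_is_valid(lower, upper, min_relative_variation=1e-10):
    # Assumed definition: range width relative to the largest bound magnitude.
    # The exact formula DesignXplorer applies is not documented here.
    scale = max(abs(lower), abs(upper)) or 1.0
    return (upper - lower) / scale >= min_relative_variation
```

A degenerate range such as (1.0, 1.0) fails the check and would need to be widened or the parameter disabled.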

Defining Manufacturable Values

A Manufacturable Values filter can be applied to continuous parameters to ensure that real-world manufacturing or production constraints are taken into account during postprocessing. When you apply the Manufacturable Values filter to a parameter, you can define two or more Levels that contain parametric values that are feasible for manufacturing. Only these specified values will be used for analysis of the sample set; for example, Optimization will only return designs based on Manufacturable Values.

Note

The Manufacturable Values filter is currently only available for the Screening optimization method.

To apply Manufacturable Values to a continuous parameter, select the parameter in the Outline view for the DOE. Then, in the Properties view, select the Use Manufacturable Values check box. By default:


• The Number of Levels cell in the Properties view is set to 2, the minimum allowable number of levels. These levels are displayed in the Table view.

• The Value cell of the Properties table is populated with the parameter value defined in the Parameters table. This value cannot be edited in the DOE.

Defining the Range for Manufacturable Values

The values in the Lower Bound and the Upper Bound cells in the Properties view define the range of the postprocessing analysis. To change the range, select the cell containing the value to be changed, type in a number, and press Enter. You are not limited to entering integers.

When you define the range for a continuous input parameter, keep in mind that the relative variation must be equal to or greater than 1e-10 in its current unit. If the relative variation is less than 1e-10, you can either adjust the variation range or disable the parameter.

• To adjust the variation range, select the parameter in the Outline view and then edit the values in the Lower Bound and Upper Bound cells of the Properties view.

• To disable the parameter, deselect the Enable check box in the Outline view.

Note

When you adjust the range, Manufacturable Values falling outside the new range are removed automatically and all results are invalidated.


Defining the Number of Levels for Manufacturable Values

The Number of Levels cannot be changed in the Properties view. To change the Number of Levels for the parameter, you must add or delete a Level in the Table view. You can also edit the values for existing Levels in the Table view.

• To add a Level, select the empty cell in the bottom row of the Manufacturable Values column, type in a number, and press Enter. The Number of Levels in the Properties view is updated automatically.

If you enter a value outside of the range defined in the Properties view, you can opt to either automatically extend the range to encompass the new value or cancel the commit of the new value. If you opt to extend the range, all existing DOE results and design points will be deleted.

• To delete a Level, right-click on any part of the row containing the Level to be removed and select Delete from the context menu. Note that when you delete the Level representing either the upper or the lower bound, the range is not narrowed.

If you enter a value outside of the range defined in the Properties view, you can opt to either automatically extend the range to encompass the new value or cancel the commit of the new value. If you opt to extend the range, all existing DOE results and design points will be deleted.

• To change the value of an existing Level, select the cell containing the value to be changed, type in a number, and press Enter.

For information on disabling and re-enabling the Use Manufacturable Values filter, see Changing Manufacturable Values (p. 191).

Changing Input Parameters

You can make changes to an input parameter in the first cell of the system (the DOE, DOE (SSA), or Parameters Correlation cell). In the Outline view of the tab, select the input parameter that you want to edit. Most changes are made in the Properties view for that parameter. Changes made to parameters only affect the parameter information for the current system and any systems that share a project cell (DOE, Response Surface) with that system.

• An input parameter may be selected or deselected for use in that system by checking the box to the right of it in the DOE Outline view.

• In the Properties view, specify further attributes for the selected input parameter, such as a range (upper and lower bounds) of values for a continuous input parameter or values for a discrete input parameter.

When you make any of the following changes to an input parameter in the first cell of the system (DOE or Parameters Correlation cell):

• enable or disable an input parameter

• change the Classification of an input parameter from continuous to discrete, or vice versa

• add or remove a Level of a discrete parameter

• change the range definition of an input parameter

All generated data associated with the system that contains the modified parameter will be cleared. This means, for instance, that for a GDO system, generated data in all cells of the GDO system will be cleared if you make one of these changes in the DOE cell. A dialog box is displayed to confirm the parameter change before clearing all generated data. The change is not made if you select No in the dialog box.

Note

Using the DOE Method, the number of generated design points in the DOE Matrix is directly related to the number of selected input parameters. Be aware that specifying many input parameters will make heavy demands on computer time and resources, including system analysis, DesignModeler geometry generation, and/or CAD system generation. Also, large input parameter ranges may lead to inaccurate results.
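To make the growth concrete: a full (non-fractional) central composite design for k input parameters combines 2^k factorial corner points, 2k axis points, and one center point. The count below is the standard CCD formula, offered as an order-of-magnitude illustration; the actual DOE types in DesignXplorer may use fractional-factorial variants that generate fewer points for larger k.

```python
def full_ccd_point_count(k):
    # Standard full central composite design:
    # 2**k factorial corner points + 2*k axis points + 1 center point.
    return 2 ** k + 2 * k + 1
```

Two parameters give 9 design points; ten parameters already give 1,045, which is why each added input parameter noticeably increases update time.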

Changing Manufacturable Values

For a continuous parameter with Manufacturable Values, you can enable or disable the Manufacturable Values filter by selecting or deselecting the Use Manufacturable Values check box in the Properties view of the DOE.

• When you deselect the Use Manufacturable Values check box, the Levels will be removed from the Table view. The range will still display in the Properties view, but Manufacturable Values will not be used.

• When you reselect the Use Manufacturable Values check box, the Table view will display the previous list of Levels.

Once results are generated, you can toggle back and forth between enabled and disabled without invalidating the DOE or Response Surface. You can also add, delete, or edit Manufacturable Values without needing to regenerate the entire DOE and Response Surface, provided that you don't alter the range. So long as the range has not been altered, the information from previous updates is reused.

The following types of results, however, are based on the Manufacturable Values you defined and must be regenerated after any changes to the Manufacturable Values:

• Response points and related charts

• Min/Max objects

• Optimization candidates and charts

Setting Up Design Variables

A range (upper and lower bounds) of values is required for all continuous input parameters. Values are also required for discrete input parameters and for continuous input parameters with Manufacturable Values.

Setting Up Uncertainty Variables

For uncertainty variables (used in a Six Sigma Analysis), you must specify the type of statistical distribution function used to describe the randomness of the variable, as well as the parameters of the distribution function. For the distribution type, you can select one of the following:

• Uniform

• Triangular


• Normal

• Truncated Normal

• Lognormal

• Exponential

• Beta

• Weibull

For more information on the distribution types, see Distribution Functions (p. 263).
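As an aside, each of these distribution types has a counterpart in Python's standard random module, which is handy for experimenting with uncertainty-variable definitions outside of DesignXplorer. The parameter names below are this sketch's choices, not DesignXplorer's property names, and the truncated normal is implemented by simple rejection sampling.

```python
import random

def truncated_normal(mean, std, low, high):
    # Rejection sampling: redraw until the value falls inside [low, high].
    while True:
        x = random.gauss(mean, std)
        if low <= x <= high:
            return x

# One draw per distribution type listed above (illustrative parameter values):
random.seed(0)
draws = {
    "Uniform": random.uniform(0.0, 1.0),
    "Triangular": random.triangular(0.0, 1.0, 0.3),
    "Normal": random.gauss(100.0, 10.0),
    "Truncated Normal": truncated_normal(100.0, 10.0, 80.0, 120.0),
    "Lognormal": random.lognormvariate(0.0, 0.25),
    "Exponential": random.expovariate(0.01),      # rate = 1 / mean
    "Beta": random.betavariate(2.0, 5.0),
    "Weibull": random.weibullvariate(1.0, 1.5),   # (scale, shape)
}
```

Sampling a few thousand draws per type and plotting histograms is a quick way to see how each distribution shapes the scatter of an uncertainty variable.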

In the example below of a beam supporting a roof with a snow load, you could measure the snow height on both ends of the beam 30 different times. Suppose the histograms from these measurements look like the figures given below.

Figure 3: Histograms for the Snow Height H1 and H2

[Two histogram plots of relative frequency versus snow height: one for Snow Height H1 (values from roughly 40 to 370) and one for Snow Height H2 (values from roughly 100 to 1080).]

From these histograms, you can conclude that an exponential distribution is suitable to describe the scatter of the snow height data for H1 and H2. Suppose from the measured data you can evaluate that the average snow height of H1 is 100 mm and the average snow height of H2 is 200 mm. The parameter λ can be derived directly as 1.0 divided by the mean value, which leads to λ1 = 1/100 = 0.01 for H1 and λ2 = 1/200 = 0.005 for H2.

Output Parameters

Each output parameter corresponds to a response surface which is expressed as a function of the input parameters. Some typical output parameters are Equivalent Stress, Displacement, Maximum Shear Stress, etc.

When viewing output parameters in the Outline of the various design exploration tabs (DOE, Response Surface, etc.), you will see maximum and minimum values for each output parameter in the Properties view when you click on the parameter cell in the Outline view. Please note that the maximum and minimum values that you see in the Properties view for a parameter depend on the state of the design exploration system.

The minimum and maximum values displayed as Properties of the outputs are the "best" min/max values available in the context of the current cell: the best between what the current cell eventually produced and what the parent cell provided. A DOE cell produces design points, and a min/max can be extracted from those. A Response Surface produces Min-Max Search results if this option is enabled. If a refinement is run, new points are generated and a better min/max can be extracted from those. An Optimization produces a sample set, and again, a better min/max than what is provided by the parent Response Surface can be found in these samples. So please keep in mind the state of the design exploration system when viewing the minimum and maximum values in the parameter properties.
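The "best available min/max" logic described above can be pictured as pooling whichever sample sets exist for the cell and taking the overall extremes (a conceptual sketch only, not DesignXplorer's implementation):

```python
def best_min_max(*sample_sets):
    # Pool output values from whatever sources exist for the cell
    # (DOE points, refinement points, optimization samples, ...)
    # and take the overall extremes.
    values = [v for samples in sample_sets for v in samples]
    return (min(values), max(values)) if values else (None, None)
```

For example, a DOE min/max of (3, 5) combined with optimization samples spanning (1, 9) yields a best min/max of (1, 9).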

Working with Design Points

When a design exploration system is updated, it creates design points in the Parameter Set and requests that Workbench update them in order to obtain the corresponding output parameter values. These solved design points are the input data used by ANSYS DesignXplorer to calculate its own parametric results (e.g., response surfaces, sensitivities, optimum designs, etc.).


The number and the definition of the design points created depend on the number of input parameters and the properties of the design exploration system DOE. See Using a Central Composite Design DOE (p. 71) for more information.

You can preview the generated design points by clicking the corresponding Preview button in the toolbar before updating a Design of Experiments, a Response Surface (if performing a refinement), or another design exploration feature that generates design points during an update.

Some design exploration features provide the ability to edit the list of points, and even to edit the output parameter values of the points. See Working with Tables (p. 205) for more information.

Before updating design points, you can specify the following details about the update process:

• The order in which the design points will be updated. See Design Point Update Order (p. 195) for more information.

• The location in which the update operation will be processed. See Design Point Update Location (p. 195) for more information.

You can update a design exploration system or cell in several ways:

• In the corresponding tab, click the Update button in the toolbar.

• In the corresponding tab, right-click the component node in the Outline view and select Update from the context menu.

• From the Project Schematic, right-click on the cell and select Update from the context menu.

• In the Project Schematic, click the Update Project button in the toolbar to update all of the systems in the project.

Note

The Update All Design Points button in the Project Schematic toolbar is only intended to update the design points listed in the Parameter Set's Table of Design Points. This operation does not update the points listed in the design exploration tables.

During the update operation, the design points are updated simultaneously if the analysis system is configured to perform simultaneous solutions; otherwise, they are updated sequentially.

When the Design of Experiments, Response Surface, or Parameters Correlation component is updated, the Table of Design Points is updated dynamically: the generated points appear and their results are displayed as the points are solved.

As each design point is updated, its parameter values are written to a CSV log file that can later be imported back into the Table of Design Points so you can use the data to continue your work with the Response Surface. For more information, see Design Point Log Files (p. 198).

Related Topics:

• Design Point Update Order

• Design Point Update Location

• Preserving Generated Design Points to the Parameter Set

• Exporting Generated Design Points to Separate Projects

• Inserting Design Points


• Cache of Design Point Results

• Raw Optimization Data

• Design Point Log Files

• Failed Design Points

Design Point Update Order

You can specify the default order in which design points will be updated, either starting from DP0 (the default) or from the previous design point. To set the default, right-click on the Parameter Set bar, select Properties, and in the Properties view select a value for the Design Point Update Order setting.

You can also specify the order in which the design points are submitted to Workbench for update so that the efficiency of the operation is improved. By default, DesignXplorer already optimizes the order for DOEs that it creates, but you can modify it in different ways via the Table of Design Points:

• By editing the values in the Update Order column. If this column is not visible, right-click in the table and select Show Update Order to make it visible.

• By sorting the table by one or several columns and then right-clicking in the table and selecting Set Update Order by Row. This option regenerates the Update Order for each design point to match the sorting of the table. Note that sorting by column is not available when the table view contains several sheets of data.

• Automatically, by right-clicking in the table and selecting Optimize Update Order. This option analyzes the parameter dependencies in the project and scans the parameter values across all design points in order to determine an optimal order of update. This operation modifies the Update Order value for each design point and refreshes the table.
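The benefit of grouping design points that share geometry can be illustrated with a plain sort. The parameter names below are hypothetical, and DesignXplorer's actual algorithm also weighs data and parameter dependencies; this is only a sketch of the idea.

```python
# Hypothetical design points: "length" drives geometry, "pressure" does not.
design_points = [
    {"name": "DP 3", "length": 20.0, "pressure": 2.0},
    {"name": "DP 1", "length": 10.0, "pressure": 1.0},
    {"name": "DP 2", "length": 20.0, "pressure": 1.0},
    {"name": "DP 4", "length": 10.0, "pressure": 2.0},
]

# Updating points with identical geometry back to back means the geometry
# does not need to be regenerated and remeshed between them.
update_order = sorted(design_points, key=lambda dp: (dp["length"], dp["pressure"]))
geometry_rebuilds = sum(
    1 for a, b in zip(update_order, update_order[1:]) if a["length"] != b["length"]
)
```

In the sorted order, only one geometry rebuild is needed instead of the three that the original ordering would require.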

Before each design point update is performed, an alert is displayed if the design point update order has been changed from the recommended setting. If you do not want to see this message before each design point update for which the recommended design point update order has been modified, you can disable it in the Design Exploration Messages section of the Workbench Options dialog (accessed via Tools > Options > Design Exploration).

See Design Point Update Order in the Workbench User's Guide for more information.

Note

The primary goal of Optimize Update Order is to reduce the number of geometry and mesh system updates. In general, the optimized order is driven by the order of the update tasks, which means that the components will be updated depending on their order in the system, their data dependencies, their parameter dependencies, and the state of the parameters. This works best in a horizontal schematic with geometry as the first separate system. In a single vertically integrated system project, the parameters defined in the upper components are considered the most important. In some cases, because of the order of parameter creation or the presence of engineering data parameters, you may find that the first columns are not geometry parameters. In this case, we recommend that you sort by the geometry parameters instead of using the Optimize Update Order option.

Design Point Update Location

In a new project, by default the Design Point Update Process settings are populated with the global Workbench solution preferences defined in the Tools > Options > Solution Process dialog. You can specify where a design point update will be processed, indicating the location in which the update operation will be run; this location can differ from the update location of solution cells.

To specify the update location for design points:

1. Right-click the Parameter Set bar.

2. Select Properties.

3. Under Design Point Update Process in the Properties view, select a value from the Update Option drop-down. Available values are:

• Run in Foreground: The update is run in the current ANSYS Workbench session.

• Submit to Remote Solve Manager: The update is submitted to Remote Solve Manager (RSM), a job-queuing system that distributes tasks to available computing resources. The cell being updated enters a Pending state. For more information, see Working with Remote Solve Manager and DesignXplorer (p. 208).

For more detailed information about available Design Point Update Option values, see Solution Process in the Workbench User's Guide.

By following these steps, you can change the location for design point updates at any time. Because you can easily edit this setting at the project level, you can submit successive design point updates to different computing resources as needed.

Preserving Generated Design Points to the Parameter Set

During the update of a design exploration system, the generated design points are temporarily created in the project and listed in the Table of Design Points of the Parameter Set bar. Once the operation is completed, each generated design point is removed from the project unless it has failed to update, in which case it is preserved to facilitate further investigation.

DesignXplorer allows you to preserve generated design points so they will be automatically saved to the Parameter Set for later exploration or reuse. The preservation of design points must first be enabled at the project level, and can then be configured for individual components.

• To enable this functionality at the project level, go to Tools > Options and select Design Exploration. Under Design Points, select the Preserve Design Points After DX Run check box, indicating that design points should be saved. When you opt to preserve the design points, the design points will be available in the Parameter Set.

• To enable this functionality at the component level, right-click on the component and select Edit. In the Properties view under Design Points, select the Preserve Design Points After DX Run check box.

When design points have been preserved, they are included in the Parameter Set Table of Design Points. Any DesignXplorer points that display in any of the component Table views (DOE points, refinement points, direct correlation points, candidate points, etc.) indicate whether they correspond to a design point in the Parameter Set Table of Design Points (i.e., whether they share the same input parameter values). When a correspondence exists, the point's Name specifies the design point to which it is related. If the source design point is deleted from the Parameter Set or the definition of either point is changed, the link between the two points is broken (without invalidating your model or results) and the indicator is removed from the point's name.


Exporting Generated Design Points to Separate Projects

In addition to preserving the design points in the project, you can also export the preserved design points from the project. If you check the Export Project for Each Preserved Design Point option in the Properties view under the Preserve Design Points After DX Run property, then in addition to saving the design points to the project Table of Design Points, each of the design exploration design points is exported as a separate project in the same directory as the original project.

Note

If you run a design exploration analysis and export design points, and then you uncheck the option to export them, any design points that were marked for export and are reused in ensuing analyses will still be exported, although newly created design points will not be exported.

Inserting Design Points

Design points can be added into the project using the Insert as Design Point(s) option from the right-click menu on the rows of the Table view, or from various charts. Some of the charts and tables that allow this operation are: the Optimization candidate table, Response Surface Min-Max Search table, Samples chart, Tradeoff chart, Response chart, and Correlation Scatter chart.

Right-click on one or more table rows or a chart point and select Insert as Design Point(s) from the context menu. Insert as Design Point(s) creates new design points at the project level by copying the input parameter values of the selected Candidate Design(s). The output parameter values are not copied because they are approximated values provided by the Response Surface. Edit the Parameter Set in the Project Schematic view to view the existing design points.

The above insertion operations are available from Optimization charts, depending on the context. For instance, it is possible to right-click on a point in a Tradeoff chart in order to insert the corresponding sample as a design point via the context menu. The same operation is available from a Samples chart, unless it is being displayed as a Spider chart.

Note

If your cell is out of date, the information (charts and tables) you see is out of date, but you can still manipulate those charts and tables with the old data (change the navigator to explore different response points) and insert design points.

Cache of Design Point Results

To reduce the total time required to perform parametric analyses in a project, ANSYS DesignXplorer stores the design points that it successfully updated in its design points cache.

As a consequence, if the same design points are reused when previewing or updating a design exploration system, they immediately show up as “up-to-date” and the cached output parameter values are displayed.

The cached data are invalidated automatically when relevant data changes occur in the project. However, you can also force the cache to be cleared by right-clicking on the white space of the Project Schematic and selecting Clear Design Points Cache for All Design Exploration systems from the context menu.
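The idea behind the cache can be illustrated with a short sketch. This is a conceptual illustration only, not DesignXplorer's internal implementation, and the function names are hypothetical: results are keyed on the tuple of input parameter values, so a design point with the same inputs is returned from the cache instead of being re-solved.

```python
# Conceptual sketch of a design point cache: results are keyed on the
# input parameter values, so re-running the same point reuses the stored
# outputs instead of triggering a new solve. Illustrative only.
_cache = {}

def solve_design_point(inputs, solver):
    key = tuple(inputs)              # input values identify the design point
    if key not in _cache:            # cache miss: run the (expensive) solve
        _cache[key] = solver(inputs)
    return _cache[key]               # cache hit: return stored outputs
```

In these terms, the Clear Design Points Cache context option corresponds to emptying the cache, after which every design point must be solved again.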


Working with Design Points


Raw Optimization Data

DesignXplorer allows you to monitor design point data for Direct Optimizations, both while the optimization is in progress and after it has completed.

While a Direct Optimization system is updating, the design point data calculated by DesignXplorer is displayed in the Table view. Design points pulled from the cache are also displayed. In most cases, the Table view is refreshed dynamically as design points are submitted for update and as they are updated. If the design points are updated via Remote Solve Manager, however, the results are available and the Table view is refreshed only when the RSM job is completed.

Once the Direct Optimization is completed, the raw design point data is saved. To display this data in the Table view, select the Raw Optimization Data node in the Outline view. The data is not editable, but you can export it to a CSV file by right-clicking inside the table and selecting the Export Data context option (for details, see Extended CSV File Format (p. 281)). Once the table data is exported, you can then import it as a custom DOE.

You can also right-click on one or more cells within the table data and select one of the following options:

• Insert as Design Point: Creates a new design point at the project level by copying the input parameter values of the selected candidate point(s). The output parameter values are not copied because they are approximated values provided by the Response Surface.

• Insert as Custom Candidate Point: Creates new custom candidate points in the Table of Candidate Points by copying the input parameter values of the selected candidate point(s).

Note

The design point data is in raw format, which means it is displayed without analysis or optimization results and so does not show feasibility, ratings, Pareto fronts, etc.

Design Point Log Files

DesignXplorer saves design point update data. As each design point is solved, DesignXplorer immediately writes its full definition (i.e., both input and output parameter values) to a design point log file. This occurs when you do any of the following things:

• Update the Current design point by right-clicking it and selecting Update Selected Design Point fromthe right-click context menu.

Note

The Update Project operation is processed differently than a design point update, so when you update the whole project, design point data is not logged. You must use the right-click context menu to log data on the Current design point.

• Update either selected or all design points via the Update Selected Design Point option of the right-click context menu.

• Update a design exploration system such as Design of Experiments.


• Open an existing project. Data for all of the up-to-date design points in the project is immediately logged to the file.

Formatting

The generated log file is in the “Extended CSV File Format” used by ANSYS DesignXplorer to export table and chart data and to import data from external CSV files to create new design, refinement, and verification points. It is primarily formatted according to the Comma-Separated Values standard (file extension CSV), but also supports several additional non-standard formatting conventions. For details, see Extended CSV File Format (p. 281).

File Location

The log file is named DesignPointLog.csv and is written to the user_files directory of the Workbench project. You can locate the file in the Files view of Workbench via the View > Files menu option.

Importing the File

Since the design point log file is in the Extended CSV File Format used elsewhere by DesignXplorer, you can import the design point data back into the Table of Design Points in the Design of Experiments component of any design exploration system.

Note

In order to import data from the design point log file:

• Set the DOE type to Custom.

• The list of parameters in the file must exactly match the order and parameter names (i.e., P1, P7, P3, etc.) in DesignXplorer. To import the log file, you may need to manually extract a portion, using the process described below.

Manually Extracting Part of the Log File

1. Identify the column order and parameter names used by DesignXplorer. You can do this by either of the following methods:

• Review the column order and parameter names in the header row of the Table view.

• Export the first row of your custom DOE to create a file with the correct order and parameter names for the header row.

2. Find DesignPointLog.csv in the user_files directory of the Project Directory and compare it to your exported DOE file. Verify that the column order and parameter names exactly match those in DesignXplorer.

3. If necessary, update the column order and parameter names in the design point log file.


4. If parameters were added or removed from the project, the file will contain several blocks of data, distinguished by header lines, to reflect this. Manually remove the unnecessary sections from the file, keeping only the block of data that is consistent with your current parameters.

Note

The header line is produced when the log file is initially created and reproduced whenever a parameter is added or removed from the project. If parameters have been added or removed, you will need to reverify the match between DesignXplorer and the log file header row.
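Step 4 can also be scripted. The sketch below uses hypothetical helper names and assumes that each block in the log starts at a header line; you supply a predicate that recognizes header lines in your particular file. It keeps only the most recent block, which is the one consistent with the current parameters.

```python
# Keep only the last block of a design point log whose parameter set has
# changed over time. Assumption: a new block starts at each header line,
# recognized by the caller-supplied predicate `is_header`.

def split_blocks(lines, is_header):
    """Group lines into blocks, each block starting at a header line."""
    blocks = []
    for line in lines:
        if is_header(line):
            blocks.append([line])       # a header opens a new block
        elif blocks:
            blocks[-1].append(line)     # a data line joins the current block
    return blocks

def keep_last_block(lines, is_header):
    """Return the most recent block (the current parameter set)."""
    blocks = split_blocks(lines, is_header)
    return blocks[-1] if blocks else []
```

For example, if header lines in your file begin with a Name column label (an assumption you should verify against your own log), passing `lambda l: l.startswith("Name")` as the predicate returns only the final block.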

Importing the Log File into the Table of Design Points

1. In the Design of Experiments tab, verify that the Design of Experiments Type property is set to Custom.

2. Right-click on any cell in the Table of Design Points and select Import Design Points from the context menu.

3. Browse to the user_files directory of the Project Directory and select the DesignPointLog.csv file.

The design point data will be loaded into the Table of Design Points.

4. Update the DOE.

Failed Design Points

This section deals with failed design points. It includes recommendations on how to prevent design point failures from occurring, instructions on preserving design point data for future troubleshooting, and various methods that you can use to deal with design points once they have failed.

The following topics are discussed in this section:
Preventing Design Point Update Failures
Preserving Design Points and Files
Handling Failed Design Points

Preventing Design Point Update Failures

The most effective way to deal with design points that fail to update is to take a proactive approach, preventing the failures from occurring in the first place. Common causes of failed design points include licensing issues, IT issues (such as networking or hardware issues), a lack of robustness in the geometry/mesh and solver, and parametric problems or conflicts. This section offers recommendations to help you increase the probability of a successful design point update.

Minimize Potential Licensing and IT Issues

Before doing a design point update, you can minimize design point failures by addressing issues that are not directly related to your project or its contents, but that could cause problems during the update.

Check your network connections.

Check your network connections to verify that everything is in order.


Set Computer Power options to “always on.”

In your Computer Power options, configure your computer to remain turned on; set options for the time after which to turn off the hard disk, sleep, and hibernate to Never. (This can avoid update issues that may be caused by the computer timing out or shutting down during an update.)

Verify that licenses are available.

If you’re using CAD, in the Workbench interface go to Tools > Options, select Geometry Import, and set CAD Licensing to Hold (the default setting is Release). This places a hold on the license currently being used, so that it will be available for the update.

Leave the modeling application open.

Don’t close your modeling application while you are running DesignXplorer. For example, using the “Reader” mode in CAD can create update issues because the editors are closed by default.

Restart ANSYS Mechanical or ANSYS Meshing.

By default, the ANSYS Mechanical and ANSYS Meshing applications will restart after each (one) design point. This default value lengthens the overall processing time, but will improve overall system performance (memory and CPU) when the generation steps of each design point (geometry, mesh, solve, post-processing) are lengthy.

If overall system performance is not a concern, you can reduce the overall processing time by directing the applications to restart less frequently or not at all. In the Workbench interface, go to Tools > Options and select Mechanical or Meshing. Under Design Points:

• To restart less frequently, set the number of design points to update before restarting the application to a higher value, such as 10.

• To prevent restarts completely, deselect the check box for During a design point update, periodically restart the (Mechanical or Meshing) application.

Create a Robust Model

To create a robust model, review both the parameterization of your model (for instance, CAD or ANSYS DesignModeler) and your DesignXplorer settings.

Keep only what is being used.

For example, only expose the parameters that you are actively optimizing. Turn off surfaces that aren’t being used, coordinate system imports, etc.

Identify and correct potential parametric conflicts.

While design exploration provides some safeguards against parametric conflicts and poorly defined problems, it cannot identify and eliminate all potential issues. Give careful consideration to your parametric setup, including factors such as ranges of variation, to make sure that the setup is reasonable and well-defined.

Check your mesh.

If you have geometry parameters, double-check your mesh so as to avoid any simulation errors that could be caused by meshing. Pay particular attention to local refinement and global quality. If you are using ANSYS Mechanical or Fluent, use the Morphing option when possible.

Use the Box-Behnken DOE type.

Consider using the Box-Behnken Design of Experiments type if your project has parametric extremes (for example, has extreme parameter values in corners that are difficult to build) and has 12 or fewer continuous input parameters. Since the Box-Behnken DOE doesn’t have corners and does not combine parametric extremes, it can reduce the risk of update failures.

Test the Robustness of the Model

Before you submit multiple (or all) design points for update, test the model and verify the robustness of your design.

Verify model parameters by running a test project.

Design points often fail because the model parameters cannot be regenerated. To test the model parameters, try creating a project with a test load and coarse mesh that will run quickly. Solve the test project to verify the validity of the parameters.

Verify model parameters by submitting a Geometry-Only update to Remote Solve Manager.

Starting in ANSYS version 14.0, you can submit a Geometry-Only update for all design points to DesignXplorer. In the Parameter Set Properties view, set Update Option to Submit to Remote Solve Manager and set Pre-RSM Foreground Update to Geometry. When you submit the next update, the geometry is updated first, so any geometry failures will be found sooner.

If necessary, reparameterize the model.

Occasionally, all of your parameters appear to be feasible, but the model still fails to update due to issues in the history-based CAD tool. In this case, you can reparameterize the model in ANSYS SpaceClaim or a similar “direct modeling” application.

Verify that the Current design point updates successfully.

Try to update the Current design point before submitting a full design point update. If the update fails, you can open the project in an ANSYS application to further investigate that design point. See Handling Failed Design Points (p. 203) for more information.

Attempt to update a few “extreme” design points.

Select a few design points with extreme values (for instance, the smallest gap with the largest radius) and try to update them. In this way, you can assess the robustness of your design before committing to a full correlation.

Preserving Design Points and Files

By default, when a design point other than the Current design point fails, DesignXplorer saves the design point but does not retain any of its files; calculated data is saved on disk only for the Current design point. Before performing a full update, you may want to configure DesignXplorer to save design points and related files. This ensures that after the update, you’ll have materials to further investigate any update problems that might occur.

The preservation of design points and related files must be enabled first at the project level, and then can be configured for individual components.

• To enable this functionality at the project level, go to Tools > Options and select Design Exploration. Under Design Points, select both the Preserve Design Points After DX Run and the Export Project for Each Preserved Design Point check boxes, indicating that both design points and their files should be saved. When you opt to preserve the design points, the design points will be available in the Parameter Set. When you also opt to retain the files, each of the design points will be saved as a separate project in the same directory as the original project. You can later open these projects and use them to investigate any design points that failed to update.


• To configure this functionality at the component level, right-click on the component and select Edit. In the Properties view under Design Points, select both the Preserve Design Points After DX Run and Export Project for Each Preserved Design Point check boxes. (This is the same as selecting the Export option for a parameter in the Parameter Set.)

Note

• Preserved design points are still part of your original project, so the results of your investigations on these design points in the separate project will also be updated to the project.

• Since every design point is saved as a project, this approach can impact performance and disk resources when used on projects with larger numbers (> 100) of design points.

Handling Failed Design Points

This section contains information and suggestions on investigating and handling design points once they’ve failed. The strategy you use depends on how many design points failed and the nature of the failure.

Perform Preliminary Troubleshooting Steps

Review error messages.

The first step in gathering information about the update issue is to review error messages. In DesignXplorer, a failed design point is considered to be completely failed. In reality, though, it may be a partial failure, where the design point failed to update for only a single output. As such, you need to know which output is related to the failure.

You can find this information by hovering your mouse over a failed design point in the Table of Design Points; a primary error message will tell you which output parameters have failed for the design point and specify the name of the first failed component. Additional information may also be available in the Messages view.

Reattempt to update all design points.

Sometimes a design point update fails simply because of a short-term issue (such as a license or CPU resources being temporarily unavailable), and the issue may actually be remedied in the time between your first update attempt and your second one. Try again to update all design points before proceeding further.

Before initiating a design point update, you can also set the Retry Failed Design Points option to automatically retry the update for failed design points. This option is available for all DesignXplorer components except for Six Sigma Analysis and a Parameters Correlation that is linked to a response surface. When this option is selected, DesignXplorer will make additional attempts to solve all of the design points that failed during the previous run. You can specify both the number of times the update should be retried and the delay in seconds between each attempt.

• To set this option on a global level, select Tools > Options in Workbench to open the Options dialog. Open the Design Exploration item in the tree and select the Retry Failed Design Points check box. Specify your preferences in the Number of Retries and Retry Delay fields that are displayed.

• To set this option at the project level, open the tab for the component. In the Properties view, set the Number of Retries property to the desired number (a value of 0 disables the option) and enter a value for the Retry Delay property.
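The semantics of Number of Retries and Retry Delay can be sketched as follows. This is a conceptual illustration, not DesignXplorer code; `update_fn` is a hypothetical callable that returns True when a design point updates successfully.

```python
import time

def update_with_retries(update_fn, design_points, number_of_retries=0, retry_delay=0.0):
    """Run update_fn on each point; re-run failures for up to number_of_retries
    extra rounds, waiting retry_delay seconds before each retry round."""
    failed = list(design_points)
    for attempt in range(1 + number_of_retries):   # initial run + retries
        if not failed:
            break
        if attempt > 0:
            time.sleep(retry_delay)                # pause between rounds
        failed = [p for p in failed if not update_fn(p)]
    return failed                                  # points still failed at the end
```

Note how a Number of Retries of 0 disables retrying, matching the property behavior described above.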


Address licensing and IT issues.

Error messages may indicate that the failure was caused by factors external to DesignXplorer, such as network, licensing, or hardware issues. For example, the update may have failed because a license was not available when it was needed, or because there was a problem with your network connectivity. If the issue isn’t remedied by your second attempt to update, address the issue in question and then retry the update.

Create a Duplicate Test Project

When preparing to investigate one or more failed design points, keep in mind that editing the current project during your investigation could invalidate other design points and your design exploration results. This also applies to any separate design point projects created when you preserve design points and files; these remain part of the original project, and can affect other design points and your design exploration results.

To avoid potentially invalidating the original project, it’s recommended that you create a duplicate test project via the Save As menu option. You can either duplicate the entire project or, to focus your examination on a particular failed design point, duplicate one of the individual projects created for preserved design points.

Investigate the Failed Design Points

Copy the Failed Design Point to Current.

By default, with a design point update, ANSYS Workbench retains calculated data for only the Current design point. To investigate a failed design point, you can right-click on the point in the Parameter Set Table of Design Points and select Copy Inputs to Current from the context menu. With the values of the failed design point set to Current, you can then open the project in an editor to troubleshoot the update issue. This approach works well when only a single design point has failed.

For more detailed information on copying design points to Current, see Activating and Exporting Design Points.

Mark Failed Design Points for Export.

When several design points have failed, you can save the calculated data for these points by marking them for export. In the Parameter Set Table of Design Points, select the Export check box for each of the failed design points. With the next update, a separate project will be created for each exported design point. As with a project created by preserving design points and retaining their files, the new project can be opened and used to further investigate the reasons for the design point’s failure to update.

For more detailed information on exporting design points, see Activating and Exporting Design Points.

Working with Sensitivities

The sensitivity plots available in design exploration allow you to understand how sensitive the output parameters are to the input parameters. This understanding can help you innovate toward a more reliable and better quality design, or to save money in the manufacturing process while maintaining the reliability and quality of your product. You can request a sensitivity plot for any output parameter in your model.

The sensitivities available under the Six Sigma Analysis and the Goal Driven Optimization views are statistical sensitivities. Statistical sensitivities are global sensitivities, whereas the single parameter sensitivities available under the Responses view are local sensitivities.


The global, statistical sensitivities are based on a correlation analysis using the generated sample points, which are located throughout the entire space of input parameters.

The local parameter sensitivities are based on the difference between the minimum and maximum value obtained by varying one input parameter while holding all other input parameters constant. As such, the values obtained for local parameter sensitivities depend on the values of the input parameters that are held constant.

Global, statistical sensitivities do not depend on the values of the input parameters, because all possible values for the input parameters are already taken into account when determining the sensitivities.
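In symbols (the notation here is ours, not the guide's): if Y is an output parameter and the other inputs are held at fixed values x*, the local sensitivity with respect to input x_i described above can be written as

```latex
S_i^{\mathrm{local}}
  = \max_{x_i} Y\!\left(x_i, \mathbf{x}^{*}\right)
  - \min_{x_i} Y\!\left(x_i, \mathbf{x}^{*}\right)
```

which makes explicit that the result depends on the chosen x*, whereas the global, statistical sensitivities are computed from correlations over samples spanning the whole input space.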

Global Sensitivity Chart Limitations

• Sensitivity charts are not available if ALL input parameters are discrete.

• Sensitivities are calculated for continuous parameters only.

Related Topics:

Using Local Sensitivity Charts (p. 112)
Using the Sensitivities Chart (GDO) (p. 175)
Statistical Sensitivities in a Six Sigma Analysis (p. 270)

Working with Tables

The Table view is used to display tabular data in one or several tables. Depending on the context, the tables are read-only and filled automatically by ANSYS DesignXplorer, or they are partially or completely editable.

The background color of a cell indicates if it is editable or not:

• A gray background indicates a read-only cell

• A white background indicates an editable cell

You can add new rows by entering values in the * row of the table. You enter values in the input parameter columns. Once you have entered a value in one column in the * row, the row is added to the table and the values for the remaining input parameters will be set to the initial values of the parameters. You can then edit that row in the table and change any of the other input parameter values if needed. Output parameter values will be calculated when the component is updated.

Output parameter values calculated from a simulation (a design point update) are displayed in black text. Output parameter values calculated from a response surface are displayed in the custom color specified in the Options dialog. For details, see Response Surface Options (p. 23).

Related Topics:

Viewing Design Points in the Table View
Editable Output Parameter Values
Copy/Paste
Exporting Table Data
Importing Data from a CSV File


Viewing Design Points in the Table View

Points shown in the DesignXplorer component Table views (DOE points, refinement points, direct correlation points, candidate points, etc.) can correspond to design points in the Parameter Set Table of Design Points (i.e., they share the same input parameter values). When a correspondence exists, the point’s Name specifies the design point to which it is related.

If the source design point is deleted from the Parameter Set or the definition of either design point is changed, the link between the two points is broken (without invalidating your model or results) and the indicator is removed from the point’s name.

Editable Output Parameter Values

You can change the editing mode of the tables of design points, refinement points, and verification points (Design of Experiments and Response Surface components) in order to edit the output parameter values. Although this is normally not necessary, since the output values are provided by a real solve, it is a useful way to insert existing results from external sources such as experimental data or known designs.

The table can be in one of two modes: All Outputs Calculated or All Outputs Editable. The default mode for the table is All Outputs Calculated. To change table modes, right-click on the table and select the desired mode. The mode behavior is as follows:

• All Outputs Calculated allows you to enter values for any or all of the inputs in a row, and the outputs for that row will be calculated when the DOE is updated. In this mode, individual rows in the table can be overridden so that the outputs are editable. To override a row, right-click on it and select Set Output Values as Editable.

• All Outputs Editable allows you to enter values for any or all of the inputs and outputs in a row. In this mode, individual rows in the table can be overridden so that the outputs are calculated. To override a row, right-click on it and select Set Output Values as Calculated. If a row is changed from editable to calculated, it invalidates the DOE, requiring an Update. Rows with editable outputs are not computed during an Update of the DOE.

• If the table is in All Outputs Editable mode, you can enter values in the input or output parameter columns. If you enter a value in an input column in the * row, the row is added to the table and the values for the remaining input parameters will be set to the initial values of the parameters. The output columns will be blank and you must enter values in all columns before updating the DOE. If you enter a value in an output column in the * row, the row is added to the table and the values for all input parameters will be set to the initial values of the parameters. The remaining output columns will be blank and you must enter values in all columns before updating the DOE.

Note

• The table can contain derived parameters. Derived parameters are always calculated by the system, even if the table mode is All Outputs Editable.

• Editing output values for a row changes the component’s state to Update Required. The component will need to be updated, even though no calculations are done.

• If the points are solved and you change the table mode to All Outputs Editable and then change it back to All Outputs Calculated without making any changes, the outputs will be marked Out of Date and you must Update the component. The points will be recalculated.

Copy/Paste

You can copy/paste in the table (to/from a spreadsheet, text file, or design point table, for example). The copy/paste adapts to the current state of the table, pasting only inputs, or inputs and outputs if outputs are editable.

To manipulate large numbers of rows, it is recommended to use the export and import capabilities described in the following sections.

Exporting Table Data

Each table can be exported as a CSV file. Right-click on a cell, select the Export Data menu entry, enter a filename, and export. The contents of the complete table are exported to the selected file.

For details, see Extended CSV File Format (p. 281).

Importing Data from a CSV File

Importing data from an external file is, for instance, a way to set up a specific Design of Experiments scheme from an external tool or to use existing results as verification points. The imported data can contain input parameter values and, optionally, output parameter values as well.

The import operation is available in the context menu when you right-click in a table, on an outline node, or on a Design Exploration component which implements the import feature. For instance, you can right-click on the Table of Design Points in a Design of Experiments tab and select Import Design Points, or you can right-click on a Response Surface component from the Project Schematic view and select Import Verification Points.


As an example, to import design point values from an external CSV file to a Design of Experiments component, you need to follow these steps:

1. From the Project Schematic, right-click on the Design of Experiments cell where you want to import the design point values and select the Import Design Points menu option.

2. Select an existing CSV file and click Open.

3. A question box asks if you want the input parameter variation ranges to be extended if required. Answer yes to change the upper and lower bounds of the parameters as required; otherwise, the values falling outside of the defined ranges will be ignored by the import operation. You may also cancel the import action.

The CSV file is parsed and validated first. If the format is not valid or the described parameters are not consistent with the current project, a detailed list of the errors is displayed and the import operation is aborted. If the file is validated, the data import is performed.

The main rules driving the file import are the following:

• The file must conform to the extended CSV file format as defined in Extended CSV File Format. In particular, a header line where each parameter is identified by its ID (P1, P2, …, Pn) is mandatory to describe each column.

• The order of the parameters in the file may differ from the order of the parameters in the project.

• Disabled input parameters must not be included.

• Values must be provided in the units “As Defined” (menu Units > Display Values As Defined).

• If output parameter values are provided, they must be provided for all of the output parameters except the derived output parameters. If values are provided for derived output parameters, they are ignored.

• Even if the header line states that output parameter values are provided, it is possible to omit them on a data line.
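As a sketch of these rules, the following Python snippet writes a small table of design points with the mandatory parameter-ID header line. The parameter IDs and values are invented for illustration; the authoritative format details are in Extended CSV File Format.

```python
import csv
import io

# Hypothetical design points: two input parameters (P1, P2) and one
# output parameter (P3). The header line identifies each column by its
# parameter ID, as the import rules require.
design_points = [
    {"P1": 10.0, "P2": 2.5, "P3": 104.2},
    {"P1": 12.0, "P2": 3.0, "P3": 131.7},
]

def write_design_points(points, fileobj):
    """Write design points as a CSV table with a parameter-ID header."""
    ids = list(points[0])  # column order taken from the first point
    writer = csv.DictWriter(fileobj, fieldnames=ids)
    writer.writeheader()
    writer.writerows(points)

buf = io.StringIO()
write_design_points(design_points, buf)
print(buf.getvalue())
```

A file written this way satisfies the header rule above; the remaining rules (units, disabled parameters, derived outputs) depend on the project and are not checked here.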

Working with Remote Solve Manager and DesignXplorer

The Remote Solve Manager (RSM) is a job queuing and resource management system that distributes tasks to available local and remote computing resources. When you submit a job to RSM, it can either be queued to run in the background of the local machine or be sent to one or more remote machines. For more information on RSM, see the Remote Solve Manager User's Guide.

In order to send updates via RSM, you must first install and configure RSM. For more information, see the installation tutorials posted on the Downloads page of the ANSYS Customer Portal. For further information about tutorials and documentation on the ANSYS Customer Portal, go to http://support.ansys.com/docinfo.

Submitting Design Point Updates to RSM from DesignXplorer

You can configure DesignXplorer to send design point updates to RSM. This configuration can be different from the configuration for Solution cell updates, so it is possible to send design point updates and Solution cell updates to different locations for processing. For example, design point updates might be submitted to RSM for remote processing while the Solution cell update is run in the background of the local machine, or the Solution cell update might be submitted to RSM while the design point updates are run in the foreground of the local machine.

It is possible to change the location of individual design point updates at any time, so you can send successive design point updates to different resources within the same session. For example, if one update is currently running in the foreground of your local machine, you can send a different update to RSM for remote processing.

For more information on submitting design points for remote update, see Updating Design Points via Remote Solve Manager (RSM) in the ANSYS Workbench Customization Toolkit.

For more information on configuring the Solution cell update location, see Submitting Solutions for Local, Background, and Remote Solve Manager Processes in the Workbench User's Guide.

For more information on configuring design point update locations, see Design Point Update Location (p. 195) in the Design Exploration User's Guide.

Previewing Updates

When a Design of Experiments, Response Surface, or Parameters Correlation system cell requires an update, in some cases you can preview the results of the update. A preview prepares the data and displays it without updating the design points. This allows you to experiment with different settings and options before actually solving a DOE matrix or generating a Response Surface or a list of Parameters Correlation samples.

• On the Project Schematic, you can either right-click the cell to be updated and select Preview from the context menu, or select the cell and click the enabled Preview button in the toolbar.

• In the tab for the cell (accessed by right-clicking the cell and selecting Edit), you can either right-click on the root node in the Outline view and select Preview, or select the node and click the Preview button in the toolbar.

Pending State

Once a remote update for a Design of Experiments, Response Surface, or Parameters Correlation cell has begun, the cell enters a Pending state that allows some degree of interaction with the project. While the cell is in a Pending state, you can:

• Access the Tools > Options dialog, access the Parameter Set Properties view, or archive the project.

• Follow the overall process of an update in the Progress view by clicking the Show Progress button at the bottom right corner of the window.

• Interrupt, abort, or cancel the update by clicking the red “x” icon in the corner of the Progress view and then clicking the associated button.

• Follow the process of individual design point updates in the Table view by right-clicking the cell being updated and selecting Edit. Each time a design point is updated, results and status are displayed in the Table view.

• Exit the project and either create a new project or open another existing project. If you exit the project while it is in a Pending state due to a remote design point update, you can later reopen it and click the Resume button in the toolbar to resume the update.


You can exit a project while a design point update or a Solution cell update via RSM is still in progress. For information on behavior when exiting during an update, see Updating Design Points via Remote Solve Manager (RSM) or Exiting a Project during an RSM Solution Cell Update in the Workbench User's Guide.

Note

For iterative update processes such as Kriging refinement, Sparse Grid Response Surface updates, and Correlation with Auto-Stop, design point updates are sent to RSM iteratively; the calculations of each iteration are based on the convergence results of the previous iteration. As a result, if you exit Workbench during the Pending state for an iterative process, when you reopen the project, only the current iteration will have completed.

For more detailed information on the Pending state, see Updating Design Points via RSM or Project Schematic in the Workbench User's Guide.

Note

The Pending state is not available for updates that contain verification points or candidate points. You can still submit these updates to RSM, but you will not receive intermediate results during the update process and you cannot exit the project. Once the update is submitted via RSM, you must wait for the update to complete and results to be returned from the remote server.

Working with Design Exploration Project Reports

The DesignXplorer project report enables you to generate a report containing the results of your design exploration project, providing a visual “snapshot” of your Response Surface, Goal Driven Optimization, Parameters Correlation, or Six Sigma Analysis study. The contents and organization of the report reflect the layout of the Project Schematic and the current state of the project.

To generate a report for a project, select the File > Export Report menu option. Once you have generated a project report, you can then save it and edit its contents as needed.

The project report for each type of design exploration analysis system contains a summary; separate sections for global-level, system-level, and component-level project information; and appendices.

Project Summary The project summary includes the project name, date and time created, and product version.

Global Project Information The section containing global project information includes a graphic of the Project Schematic and tables corresponding to the Files, Outline of All Parameters, Design Points, and Properties views.

System Information The project report contains a section for each of the analysis systems in the project. For example, if your project contains both a Goal Driven Optimization system and a Six Sigma Analysis, the report will include a section for each of them.

Component Information For each analysis system section, the project report contains subsections that correspond to the cells in that type of system. For example, a Goal Driven Optimization system includes a Design of Experiments, a Response Surface, and an Optimization component, and so will have subsections that correspond to each; a Parameters Correlation system includes only a Correlation component, so the Parameters Correlation section will include only a Correlation subsection. Subsections contain project data such as parameter or model properties and charts.

Appendices The appendices contain matrices, tables, and other types of information related to the project.

For more information on working with project reports, see Working with Project Reports in the Workbench User's Guide.



DesignXplorer Theory

When performing a design exploration, a theoretical understanding of the available methods is beneficial. The underlying theory of the methods is categorized as follows:

Understanding Response Surfaces (p. 213): Use to build a continuous function of output versus input parameters.
Understanding Goal Driven Optimization (p. 224): Use to create an optimization based on response surface evaluations (Response Surface Optimization) or real solves (Direct Optimization).
Understanding Six Sigma Analysis (p. 257): Use to run a statistical analysis to quantify the reliability/quality of a design.

Understanding Response Surfaces

In engineering design, it is very important to understand which and how many input variables contribute to the output variables of interest. Reaching a conclusion about which input variables influence the output variables, and how, can be a lengthy process. Designed experiments replace this lengthy, costly, and time-consuming trial-and-error search with a powerful and cost-effective (in terms of computational time) statistical method.

A very simple designed experiment is the screening design. In this design, a permutation of the lower and upper limits (two levels) of each input variable (factor) is considered to study their effect on the output variable of interest. While this design is simple and popular in industrial experimentation, it captures only a linear effect, if any, between the input variables and the output variables. Furthermore, the effect of an interaction of any two input variables on the output variables, if any, is not characterizable.

To compensate for the insufficiency of the screening design, it is enhanced to include the center point of each input variable in the experimentation. The center point of each input variable allows a quadratic effect (a minimum or maximum inside the explored space) between input variables and output variables to be identified, if one exists. The enhancement is commonly known as a response surface design, which provides a quadratic model of the responses. The quadratic response model can be calibrated using a full factorial design (all combinations of each level of each input variable) with three or more levels. However, full factorial designs generally require more samples than necessary to accurately estimate model parameters. In light of this deficiency, statistical procedures have been developed to devise much more efficient experiment designs that use three or five levels of each factor, but not all combinations of levels; these are known as fractional factorial designs. Among the fractional factorial designs, the two most popular response surface designs are Central Composite Designs (CCDs) and Box-Behnken Designs.

Design of Experiments types are:

Central Composite Design (p. 214)
Box-Behnken Design (p. 215)

Response Surfaces are created using:

Standard Response Surface - Full 2nd-Order Polynomial algorithms (p. 216)
Kriging Algorithms (p. 217)


Non-Parametric Regression Algorithms (p. 218)
Sparse Grid Algorithms (p. 221)

Central Composite Design

Central Composite Designs, also known as Box-Wilson Designs, are five-level fractional factorial designs suitable for calibrating the quadratic response model. There are three types of CCDs commonly used in experiment designs: circumscribed, inscribed, and face-centered CCDs. The five-level coded values of each factor are represented by [-α, -1, 0, +1, +α], where [-1, +1] corresponds to the physical lower and upper limits of the explored factor space. Note that [-α, +α] establishes new “extreme” physical lower and upper limits for all factors. The value of α varies depending on the design property and the number of factors in the study. For circumscribed CCDs, considered to be the original form of Central Composite Design, the value of α is greater than 1. The following is a geometrical representation of a circumscribed CCD of three factors:

Example 1: Circumscribed

Inscribed CCDs, on the contrary, are designed using [-1, +1] as the “true” physical lower and upper limits for the experiments. The five-level coded values of an inscribed CCD are obtained by scaling down the circumscribed CCD by the value of α evaluated from the circumscribed CCD. For inscribed CCDs, the five-level coded values are labeled [-1, -1/α, 0, +1/α, +1]. The following is a geometrical representation of an inscribed CCD of three factors:

Example 2: Inscribed


Face-centered CCDs are a special case of Central Composite Designs, in which α = 1. As a result, the face-centered CCD becomes a three-level design whose axial points are located at the center of each face formed by any two factors. The following is a geometrical representation of a face-centered CCD of three factors:

Example 3: Face-Centered
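The coded levels of the three CCD variants can be sketched as follows. As an assumption for illustration, α below uses the common "rotatable" choice α = (2^k)^(1/4); as noted above, the actual value of α depends on the design property and the number of factors.

```python
# Coded factor levels for the three CCD variants described above.
# Assumption: alpha = (2**k) ** 0.25, the common rotatability choice.

def ccd_levels(k, variant="circumscribed"):
    """Return the coded levels of one factor for a k-factor CCD."""
    alpha = (2 ** k) ** 0.25
    if variant == "circumscribed":
        # Axial points fall outside the [-1, +1] physical range.
        return [-alpha, -1.0, 0.0, 1.0, alpha]
    if variant == "inscribed":
        # Circumscribed levels scaled down by alpha so the extremes are +/-1.
        return [-1.0, -1.0 / alpha, 0.0, 1.0 / alpha, 1.0]
    if variant == "face-centered":
        # alpha = 1 collapses the design to three levels.
        return [-1.0, 0.0, 1.0]
    raise ValueError("unknown CCD variant: " + variant)

print(ccd_levels(3))  # circumscribed levels with alpha = 8**0.25 ~ 1.682
```

This only enumerates the per-factor levels; building the full design also requires the factorial, axial, and center point combinations.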

Box-Behnken Design

Unlike Central Composite Designs, the Box-Behnken Design is quadratic and does not contain an embedded factorial or fractional factorial design. As a result, the Box-Behnken Design has a limited capability for orthogonal blocking compared to Central Composite Designs. The main difference between the Box-Behnken Design and Central Composite Designs is that Box-Behnken is a three-level quadratic design in which the explored space of factors is represented by [-1, 0, +1], with the “true” physical lower and upper limits corresponding to [-1, +1]. In this design, the sample combinations are located at the midpoints of the edges formed by any two factors. The following is a geometrical representation of a Box-Behnken design of three factors:

Example 4: Box-Behnken Designs of Three Factors
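The Box-Behnken sampling rule (midpoints of the edges formed by any two factors, plus the center point) can be sketched as follows; center-point replicates and blocking are omitted in this illustration.

```python
from itertools import combinations, product

def box_behnken(k):
    """Box-Behnken sample: every pair of factors at -1/+1 with the
    remaining factors at 0, plus a single center point."""
    points = []
    for i, j in combinations(range(k), 2):      # each pair of factors
        for a, b in product((-1, 1), repeat=2): # edge midpoints
            p = [0] * k
            p[i], p[j] = a, b
            points.append(tuple(p))
    points.append(tuple([0] * k))               # center point
    return points

pts = box_behnken(3)
print(len(pts))  # 13 treatments for three factors
```

For three factors this yields the 12 edge midpoints plus the center, matching the "midpoints of edges" description above.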

The circumscribed CCD generally provides a high quality of response prediction. However, it requires factor level settings outside the physical range. In some industrial studies, due to physical and/or economic constraints, the use of corner points (where all factors are at an extreme) is prohibited. In this case, if possible, the factor spacing/range can be planned in advance to ensure that [-α, +α] for each coded factor falls within feasible/reasonable levels.

Unlike the circumscribed CCD, the inscribed CCD uses only points within the factor levels originally specified. In other words, the inscribed CCD does not have the “corner point constraint” issue of the circumscribed CCD. The improvement, however, compromises the accuracy of response prediction near the factor limits (i.e., the lower and upper limits) of each factor. As a result, the inscribed CCD does not provide the same high quality of response prediction as the circumscribed CCD, although it does provide good response prediction over the central subset of the factor space.

The face-centered CCD, like the inscribed CCD, does not require points outside the original factor range. The face-centered CCD, however, provides relatively high-quality response prediction over the entire explored factor space in comparison to the inscribed CCD. The drawback of the face-centered CCD is that it gives a poor estimate of the pure quadratic coefficient, i.e., the coefficient of the square term of a factor.

For relatively the same accuracy, the Box-Behnken Design is more efficient than Central Composite Designs in cases involving three or four factors, as it requires fewer treatments of factor level combinations. However, like the inscribed CCD, its prediction at the extremes (corner points) is poor. The property of “missing corners” may be useful if those points should be avoided due to physical and/or economic constraints, because the potential for data loss in those cases can be prevented.

Standard Response Surface - Full 2nd-Order Polynomial algorithms

The meta-modeling algorithms are categorized as:

General Definitions (p. 216)
Linear Regression Analysis (p. 217)

General Definitions

The error sum of squares SSE is:

(2)  SSE = \sum_{i=1}^{n} ( y_i - \hat{y}_i )^2 = ( \{y\} - \{\hat{y}\} )^T ( \{y\} - \{\hat{y}\} )

where:

y_i = value of the output parameter at the ith sampling point
\hat{y}_i = value of the regression model at the ith sampling point

The regression sum of squares SSR is:

(3)  SSR = \sum_{i=1}^{n} ( \hat{y}_i - \bar{y} )^2

where:

\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i

The total sum of squares SST is:

(4)  SST = \sum_{i=1}^{n} ( y_i - \bar{y} )^2


For linear regression analysis, the relationship between these sums of squares is:

(5)  SST = SSR + SSE
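The partition in Equation 5 can be checked numerically on a small ordinary least-squares line fit; the identity holds for linear regression models that include an intercept term. The data below are invented for illustration.

```python
# Numeric check of SST = SSR + SSE for an ordinary least-squares line fit.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.8, 5.1]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# Slope and intercept from the least-squares normal equations.
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar
yhat = [intercept + slope * x for x in xs]  # regression model values

sse = sum((y - yh) ** 2 for y, yh in zip(ys, yhat))
ssr = sum((yh - ybar) ** 2 for yh in yhat)
sst = sum((y - ybar) ** 2 for y in ys)
print(sst, ssr + sse)  # the two agree to rounding error
```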

Linear Regression Analysis

For a linear regression analysis, the regression model at any sampled location \{x\}_i, with i = 1, ..., n, in the m-dimensional space of the input variables can be written as:

(6)  y_i = [t]_i \{c\} + \epsilon_i

where:

[t]_i = row vector of regression terms of the response surface model at the ith sampled location
\{c\} = \{ c_1 \; c_2 \; \ldots \; c_p \}^T = vector of the regression parameters of the regression model
p = total number of regression parameters. For linear regression analysis, the number of regression parameters is identical to the number of regression terms.
\epsilon_i = residual error at the ith sampled location

Collecting the row vectors [t]_i of all n sampled locations into the matrix [T], the regression coefficients \{c\} can be calculated from the least-squares normal equations:

(7)  \{c\} = ( [T]^T [T] )^{-1} [T]^T \{y\}
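The least-squares solution for the regression coefficients can be sketched for a full second-order polynomial in one input variable, t(x) = [1, x, x²]. The sample data are invented, and a naive Gaussian elimination stands in for a production linear solver; exactly quadratic data should recover the generating coefficients.

```python
def solve(a, b):
    """Solve the linear system a*c = b by Gaussian elimination with
    partial pivoting (fine for the tiny 3x3 system used here)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for k in range(col, n + 1):
                m[r][k] -= f * m[col][k]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (m[r][n] - sum(m[r][k] * c[k] for k in range(r + 1, n))) / m[r][r]
    return c

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x ** 2 + 0.5 * x + 3.0 for x in xs]        # exactly quadratic data

T = [[1.0, x, x * x] for x in xs]                # regression terms [t]_i
TtT = [[sum(T[i][r] * T[i][c] for i in range(len(xs))) for c in range(3)]
       for r in range(3)]                        # [T]^T [T]
Tty = [sum(T[i][r] * ys[i] for i in range(len(xs))) for r in range(3)]
coeffs = solve(TtT, Tty)                         # {c} = ([T]^T[T])^-1 [T]^T {y}
print(coeffs)  # close to [3.0, 0.5, 1.0]
```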

Kriging Algorithms

Kriging postulates a combination of a polynomial model plus departures of the form:

(8)  y(x) = f(x) + Z(x)

where y(x) is the unknown function of interest, f(x) is a polynomial function of x, and Z(x) is the realization of a normally distributed Gaussian random process with mean zero, variance σ², and non-zero covariance. The f(x) term in Equation 8 is similar to the polynomial model in a response surface and provides a “global” model of the design space.

While f(x) “globally” approximates the design space, Z(x) creates “localized” deviations so that the Kriging model interpolates the N sample data points. The covariance matrix of Z(x) is given by:

(9)  \mathrm{Cov}\big[ Z(x^i), Z(x^j) \big] = \sigma^2 [R], \quad R_{ij} = r(x^i, x^j)

In Equation 9, [R] is the correlation matrix and r(x^i, x^j) is the spatial correlation of the function between any two of the N sample points x^i and x^j. [R] is an N×N symmetric, positive definite matrix with ones along the diagonal. The correlation function r(x^i, x^j) is the Gaussian correlation function:

(10)  r(x^i, x^j) = \exp\Big( -\sum_{k=1}^{M} \theta_k \, \big| x_k^i - x_k^j \big|^2 \Big)

The θ_k in Equation 10 are the unknown parameters used to fit the model, M is the number of design variables, and x_k^i and x_k^j are the kth components of sample points x^i and x^j. In some cases, using a single correlation parameter gives sufficiently good results; you can specify the use of a single correlation parameter, or


one correlation parameter for each design variable (Tools > Options > Design Exploration > Response Surface > Kriging Options > Kernel Variation Type: Variable or Constant).

Z(x) can be written as:

(11)  Z(x) = \sum_{i=1}^{N} \lambda_i \, r(x, x^i)
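The Gaussian correlation function of Equation 10 can be sketched as follows. The sample points and θ values are invented, and the snippet only builds the correlation matrix [R]; fitting the θ_k and solving for the λ_i weights are omitted.

```python
from math import exp

def gauss_corr(xi, xj, theta):
    """Gaussian correlation r(x^i, x^j) = exp(-sum_k theta_k |x_k^i - x_k^j|^2)."""
    return exp(-sum(t * (a - b) ** 2 for t, a, b in zip(theta, xi, xj)))

# Three hypothetical sample points in a two-variable design space, with
# one correlation parameter per design variable ("Variable" kernel type).
samples = [(0.0, 0.0), (0.5, 0.2), (1.0, 1.0)]
theta = (2.0, 0.5)

R = [[gauss_corr(a, b, theta) for b in samples] for a in samples]
print(R[0][0], R[0][1])  # ones on the diagonal; off-diagonals in (0, 1]
```

As the theory states, [R] comes out symmetric with ones along the diagonal, and correlation decays as sample points move apart.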

Non-Parametric Regression Algorithms

Let the input sample (as generated from a DOE method) be X = {x_1, x_2, x_3, …, x_M}, where each x_i is an N-dimensional vector and represents an input variable. The objective is to determine an equation of the form:

(12)  Y = \langle W, \vec{X} \rangle + b

where W is a weighting vector and b is a constant. In the generic non-parametric case, Equation 12 (p. 218) is rewritten as the following:

(13)  Y = \sum_{i=1}^{M} ( A_i - A_i^* ) \, \langle K(\vec{X}_i), K(\vec{X}) \rangle + b

where K(\vec{X}) is the kernel map and the quantities A_i and A_i^* are Lagrange multipliers whose derivation is shown in the following development.

In order to determine the Lagrange multipliers, we start with the assumption that the weight vector W must be minimized such that all (or most) of the sample points lie within an error zone around the fitted surface. For a simple demonstration of this concept, see Figure 4: Fitting a regression line for a group of sample points with a tolerance of ε which is characterized by slack variables ξ* and ξ (p. 219).


Figure 4: Fitting a regression line for a group of sample points with a tolerance of ε which is characterized by slack variables ξ* and ξ

Thus, we write a primal optimization formulation as the following:

(14)  Minimize  L = \frac{1}{2} \| W \|^2 + C \sum_{i=1}^{N} ( \xi_i + \xi_i^* )

subject to:

Y_i - \langle W, \vec{X}_i \rangle - b \le \varepsilon + \xi_i

\langle W, \vec{X}_i \rangle + b - Y_i \le \varepsilon + \xi_i^*

\xi_i \ge 0, \quad \xi_i^* \ge 0

where C is an arbitrary constant (> 0). In order to characterize the tolerance properly, we define a loss function on the interval (-ε, ε), which de facto becomes the residual of the solution. In the present implementation, we use the ε-insensitive loss function, which is given by:

(15)  l(\xi) = 0 \quad \forall \; |\xi| < \varepsilon, \qquad l(\xi) = |\xi| - \varepsilon \quad \text{otherwise}
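The ε-insensitive loss function of Equation 15 is simple to state in code:

```python
def eps_insensitive(residual, eps):
    """epsilon-insensitive loss: zero inside (-eps, eps), linear outside."""
    return 0.0 if abs(residual) < eps else abs(residual) - eps

print(eps_insensitive(0.05, 0.1), eps_insensitive(0.3, 0.1))
```

Residuals inside the ε-tube cost nothing, which is what lets many sample points sit exactly within the error zone around the fitted surface.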

The primal problem in Equation 14 (p. 219) can thus be rewritten using generalized loss functions as:

(16)  Minimize  L = \frac{1}{2} \| W \|^2 + C \sum_{i=1}^{N} \big( l(\xi_i) + l(\xi_i^*) \big)

subject to:

\langle W, \vec{X}_i \rangle + b - Y_i \le \varepsilon + \xi_i

Y_i - \langle W, \vec{X}_i \rangle - b \le \varepsilon + \xi_i^*


In order to solve this efficiently, a Lagrangian dual formulation is applied to yield the following expression:

(17)  L = \frac{1}{2} \| W \|^2 + C \sum_{i=1}^{N} \big( l(\xi_i) + l(\xi_i^*) \big) + \sum_{i=1}^{N} A_i \big( \langle W, \vec{X}_i \rangle + b - Y_i - \varepsilon - \xi_i \big) + \sum_{i=1}^{N} A_i^* \big( Y_i - \langle W, \vec{X}_i \rangle - b - \varepsilon - \xi_i^* \big) - \sum_{i=1}^{N} \big( \eta_i \xi_i + \eta_i^* \xi_i^* \big)

where η_i and η_i^* are the Lagrange multipliers associated with the constraints ξ_i ≥ 0 and ξ_i^* ≥ 0.

After some simplification, the dual Lagrangian can be written as the following Equation 18 (p. 220):

(18)  \min_{A, A^*} \; L = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} ( A_i - A_i^* )( A_j - A_j^* ) \langle \vec{X}_i, \vec{X}_j \rangle - \sum_{i=1}^{N} Y_i ( A_i - A_i^* ) + \varepsilon \sum_{i=1}^{N} ( A_i + A_i^* ) + C \sum_{i=1}^{N} \Big( l(\xi_i) - \xi_i \frac{\partial l(\xi_i)}{\partial \xi_i} + l(\xi_i^*) - \xi_i^* \frac{\partial l(\xi_i^*)}{\partial \xi_i^*} \Big)

subject to:

\sum_{i=1}^{N} ( A_i - A_i^* ) = 0

0 \le A_i^{(*)} \le C \, \frac{\partial l(\xi_i^{(*)})}{\partial \xi_i^{(*)}}

\frac{\partial^2 l(\xi^{(*)})}{\partial (\xi^{(*)})^2} \ge 0

Equation 18 (p. 220) is a quadratic constrained optimization problem, and the design variables are the vector A. Once this problem is solved, the constant b in Equation 12 (p. 218) can be obtained by applying the Karush-Kuhn-Tucker (KKT) conditions. When the ε-insensitive loss function is used instead of the generic loss function l(ξ), Equation 18 can be rewritten in a much simpler form as the following:

(19)  \max_{A, A^*} \; \Big[ -\varepsilon \sum_{i=1}^{N} ( A_i + A_i^* ) + \sum_{i=1}^{N} Y_i ( A_i - A_i^* ) - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} ( A_i - A_i^* )( A_j - A_j^* ) \langle \vec{X}_i, \vec{X}_j \rangle \Big]

subject to:

0 \le A_i, A_i^* \le C

\sum_{i=1}^{N} ( A_i - A_i^* ) = 0

This problem is solved by a QP optimizer to yield the Lagrange multipliers A_i and A_i^*, and the constant b is obtained by applying the KKT conditions.


Sparse Grid Algorithms

The Sparse Grid metamodeling is a hierarchical Sparse Grid interpolation algorithm based on piecewise multilinear basis functions.

The first ingredient of a Sparse Grid method is a one-dimensional multilevel basis.

Piecewise linear hierarchical basis (from level 0 to level 3):

The calculation of the coefficient values associated with a piecewise linear basis is hierarchical: the coefficients are obtained from the differences between the values of the objective function and the evaluation of the current Sparse Grid interpolation.

Example 5: Interpolation with the Hierarchical Basis
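The hierarchical calculation of coefficients can be sketched in one dimension: at each level, the surplus stored at a new node is the objective value minus the current interpolant evaluated there. This sketch assumes the objective vanishes at the interval boundaries (boundary basis functions are omitted).

```python
def hat(x, center, h):
    """Piecewise-linear hierarchical basis: 1 at `center`, 0 outside (center-h, center+h)."""
    return max(0.0, 1.0 - abs(x - center) / h)

def interpolate(x, nodes):
    """Evaluate the current hierarchical interpolant at x."""
    return sum(s * hat(x, c, h) for c, h, s in nodes)

def build_surpluses(f, max_level):
    """Hierarchical surpluses on [0, 1]: each new node stores f(node)
    minus the interpolant built from all previously added nodes."""
    nodes = []  # (center, support half-width, surplus)
    for level in range(max_level + 1):
        h = 2.0 ** -(level + 1)
        for i in range(2 ** level):
            center = (2 * i + 1) * h
            nodes.append((center, h, f(center) - interpolate(center, nodes)))
    return nodes

f = lambda x: x * (1.0 - x)   # vanishes at the boundary, as assumed
nodes = build_surpluses(f, 3)
print(nodes[0])               # level-0 node at 0.5 with surplus f(0.5) = 0.25
```

Because finer-level basis functions vanish at all coarser nodes, the resulting interpolant matches f exactly at every grid node, which is the interpolation property described above.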

For a multidimensional problem, the Sparse Grid metamodeling is based on piecewise multilinear basis functions, which are obtained by a sparse tensor product construction of one-dimensional multilevel bases.

Let l = (l_1, …, l_d) denote a multi-index, where d is the problem dimension (number of input parameters) and l_i corresponds to the level in the i-th direction.

The generation of the Sparse Grid W_l is obtained by the following tensor product:


Example 6: Tensor Product Approach to Generate the Piecewise Bilinear Basis Functions W2,0

Tensor product of linear basis functions for a two-dimensional problem:

To generate a new Sparse Grid W_l', every Sparse Grid W_l that satisfies the order relation W_l < W_l' (that is, l_i ≤ l_i' in each direction i, with l ≠ l') needs to be generated before it.

For example, in a two-dimensional problem, the generation of the grid W2,1 requires several steps:

1. Generation of W0,0

2. Generation of W0,1 and W1,0

3. Generation of W2,0 and W1,1

The calculation of the coefficient values associated with a piecewise multilinear basis is similar to the calculation of the coefficients of the linear basis: the coefficients are obtained from the differences between the values of the objective function on the new grid and the evaluation (on the same grid) of the current Sparse Grid interpolation (based on the old grids).

We can observe for higher-dimensional problems that not all input variables carry equal weight. A regular Sparse Grid refinement can lead to too many support nodes; that is why the Sparse Grid metamodeling uses a dimension-adaptive algorithm to automatically detect separability and which dimensions are more or less important, in order to reduce the computational effort for the objective functions.

The hierarchical structure is used to obtain an estimate of the current approximation error. This approximation error is used to choose the relevant direction in which to refine the Sparse Grids. In other words, if the approximation error has been found with the Sparse Grid W_l, the next iteration consists of generating new Sparse Grids obtained by incrementing each dimension level of W_l (one by one) as far as possible: the refinement of W2,1 can generate two new Sparse Grids, W3,1 and W2,2 (if W3,0 and W1,2 already exist).

The Sparse Grid metamodeling stops automatically when the desired accuracy is reached or when the maximum depth is met in all directions (the maximum depth corresponds to the maximum number of hierarchical interpolation levels to compute: if the maximum depth is reached in one direction, that direction is not refined further).

Each new generation of the Sparse Grid allows as many linear basis functions as there are points of discretization.

All Sparse Grids generated by the tensor product contain only one point, which allows more local refinement.

Sparse Grid metamodeling is more efficient with this more local refinement process, which uses fewer design points while reaching the requested accuracy faster.


Sparse Grid related topics:

Sparse Grid (p. 84)

Understanding Goal Driven Optimization

Goal Driven Optimization (GDO) is a set of constrained, multi-objective optimization techniques in which the "best" possible designs are obtained from a sample set, given the objectives you set for the parameters. The available optimization methods are:

• Screening

• MOGA

• NLPQL

• MISQP

• Adaptive Single-Objective

• Adaptive Multiple-Objective

The Screening, MISQP, and MOGA optimization methods can be used with discrete parameters. The Screening, MISQP, MOGA, Adaptive Multiple-Objective, and Adaptive Single-Objective optimization methods can be used with continuous parameters with Manufacturable Values.

The GDO process allows you to determine the effect on the input parameters of certain objectives applied to the output parameters. For example, in a structural engineering design problem, you may want to determine which combination of design parameters best satisfies minimum mass, maximum natural frequency, maximum buckling and shear strengths, and minimum cost, with maximum value constraints on the von Mises stress and maximum displacement.

This section describes GDO and its use in performing single- and multiple-objective optimization.
Principles (GDO)
Guidelines and Best Practices (GDO)
Goal Driven Optimization Theory


Principles (GDO)

You can apply Goal Driven Optimization to design optimization by using any of the following methods: Screening, MOGA, NLPQL, MISQP, Adaptive Single-Objective, or Adaptive Multiple-Objective.

• The Screening approach is a non-iterative direct sampling method by a quasi-random number gen-erator based on the Hammersley algorithm.

• The MOGA approach is an iterative Multi-Objective Genetic Algorithm, which can optimize problemswith continuous input parameters.

• The NLPQL approach is a gradient-based, single-objective optimizer which is based on quasi-Newtonmethods.

• The MISQP approach is a gradient-based, single-objective optimizer that solves mixed-integer non-linear programming problems by a modified sequential quadratic programming (SQP) method.

• The Adaptive Single-Objective approach is a gradient-based, single-objective optimizer that employsan OSF Design of Experiments, a Kriging response surface, and the MISQP optimization algorithm.

• The Adaptive Multiple-Objective approach is an iterative, multi-objective optimizer that employsa Kriging response surface and the MOGA algorithm. In this method, the use of Kriging allows for amore rapid optimization process because:

– Except when necessary, it does not evaluate all design points.

– Part of the population is “simulated” by evaluations of the Kriging response surface, which is constructed from all the design points submitted by the MOGA algorithm.

MOGA is better for calculating the global optima, while NLPQL and MISQP are gradient-based algorithms ideally suited for local optimization. You can therefore start with Screening or MOGA to locate the multiple tentative optima and then refine with NLPQL or MISQP to zoom in on the individual local maximum or minimum value.

The GDO framework uses a Decision Support Process (DSP) based on satisfying criteria as applied to the parameter attributes using a weighted aggregate method. In effect, the DSP can be viewed as a postprocessing action on the Pareto fronts as generated from the results of the various optimization methods.

Usually the Screening approach is used for preliminary design, which may lead you to apply one of the other approaches for more refined optimization results. Note that running a new optimization causes a new sample set to be generated.

In either approach, the Tradeoff chart, as applied to the resulting sample set, shows the Pareto-dominant solutions. However, in the MOGA approach, the Pareto fronts are better articulated and most of the feasible solutions lie on the first front, as opposed to the usual results of the Screening approach, where the solutions are distributed across all the Pareto fronts. This is illustrated in the following two figures. Figure 5: 6,000 Sample Points Generated by Screening Method (p. 226) shows the sample as generated by the Screening method, and Figure 6: Final Sample Set After 5,400 Evaluations by MOGA Method (p. 226) shows a sample set generated by the MOGA method for the same problem.


Understanding Goal Driven Optimization


Figure 5: 6,000 Sample Points Generated by Screening Method

Figure 6: Final Sample Set After 5,400 Evaluations by MOGA Method

Figure 7: Pareto Optimal Front Showing Two Non-Dominated Solutions (p. 227) shows the necessity of generating Pareto fronts. The “first Pareto front” or “Pareto frontier” is the list of non-dominated points for the optimization.

A “dominated point” is a point that, when compared with another point, is not the better solution for any of the optimization objectives. For example, if point A and point B are both defined, point B is a dominated point when point A is the better solution for all objectives.


In the example below, the two axes represent two output parameters with conflicting objectives: the X axis represents Minimize P6 (WB_V) and the Y axis represents Maximize P9 (WB_BUCK). The chart shows two optimal solutions, point 1 and point 2. These solutions are “non-dominated,” which means that both points are equally good in terms of Pareto optimality, but for different objectives. In this example, point 1 is the better solution for Minimize P6 (WB_V) and point 2 is the better solution for Maximize P9 (WB_BUCK). Neither point is strictly dominated by any other point, so both are included on the first Pareto front.
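The dominance test described above can be sketched in a few lines of Python. This is an illustrative sketch, not DesignXplorer's implementation; the sample coordinates and the helper names (dominates, first_pareto_front) are invented for the example.

```python
def dominates(a, b, goals):
    """True if point a dominates point b: a is at least as good in every
    objective and strictly better in at least one."""
    at_least_as_good = all(
        (av <= bv) if g == "min" else (av >= bv)
        for g, av, bv in zip(goals, a, b)
    )
    strictly_better = any(
        (av < bv) if g == "min" else (av > bv)
        for g, av, bv in zip(goals, a, b)
    )
    return at_least_as_good and strictly_better

def first_pareto_front(points, goals):
    """Return the non-dominated points (the first Pareto front)."""
    return [p for p in points
            if not any(dominates(q, p, goals) for q in points)]

# Two conflicting objectives, as in Figure 7: minimize the first output,
# maximize the second.
samples = [(1.0, 2.0), (2.0, 3.0), (1.5, 1.0), (3.0, 3.0)]
front = first_pareto_front(samples, goals=("min", "max"))
```

Here the front contains exactly the two non-dominated trade-off points, mirroring the two-objective situation of Figure 7: each is the better solution for one of the two objectives.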

Figure 7: Pareto Optimal Front Showing Two Non-Dominated Solutions

Guidelines and Best Practices (GDO)

Typically, the use of NLPQL or MISQP is suggested for continuous problems when there is only one objective function. The problem may or may not be constrained and must be analytic (i.e., it must be defined only by continuous input parameters, and the objective functions and constraints should not exhibit sudden 'jumps' in their domain). The main difference from MOGA lies in the fact that MOGA is designed to work with multiple objectives and does not require full continuity of the output parameters. However, for continuous single-objective problems, the use of NLPQL or MISQP gives greater accuracy of the solution, as gradient information and line search methods are used in the optimization iterations. MOGA is a global optimizer designed to avoid local optima traps, while NLPQL and MISQP are local optimizers designed for accuracy.

The default convergence rate for the NLPQL and MISQP algorithms is set to 1.0E-06. This is computed based on the (normalized) Karush-Kuhn-Tucker (KKT) condition, which implies that the fastest convergence rate of the derivatives or the functions (objective function and constraints) determines the termination of the algorithm. The advantage of this approach is that for large problems, it is possible to get a near-optimal feasible solution quickly without being trapped in a series of iterations involving small solution steps near the optima. However, if you prefer a more numerically accurate solution, which may involve these iterations, then the convergence rate can be set as low as 1.0E-12. The range of 1.0E-06 to 1.0E-12 covers the vast majority of all optimization problems, and the choice of the convergence rate is based on your preference.


Screening is the only optimization method that does not require at least one objective. To use any of the other optimization methods, you must have an Objective defined for at least one of the output parameters (only one output can have an objective for the NLPQL, MISQP, and Adaptive Single-Objective methods).

If this is not done, then the optimization problem is either undefined (Objective Type for each parameter set to No Objective) or is merely a constraint satisfaction problem (Constraint Type set to a value other than No Constraint). When the problem is not defined, as in Figure 8: Case Where the Optimization Cannot be Run (p. 228), the MOGA, NLPQL, MISQP, Adaptive Single-Objective, or Adaptive Multiple-Objective analysis cannot be run.

If the problem defined is only one of constraint satisfaction, as in Figure 9: Case Where GDO Solves a Constraint Satisfaction Problem (p. 229), the single Pareto front displayed (from the Tradeoff chart) represents the feasible points that meet the constraints. The unfeasible points that do not satisfy the constraints can also be displayed on the chart. This is a good way to demonstrate the feasibility boundaries of the problem in the output space (i.e., to determine the boundaries of the design envelope that reflects your preferences). Unfeasible points can also be displayed on any chart where the optimization contains at least one constraint. As shown in Figure 8: Case Where the Optimization Cannot be Run (p. 228), if the entire Objective Type column is set to No Objective, the optimization process cannot be run, as no outputs have objectives specified that could drive the algorithm. The fact that user input is required before the algorithm can be run is reflected both by the state icon and the Quick Help of the Optimization cell on the Project Schematic view and of the main node of the Outline in the Optimization tab. If you try to update the optimization with no objectives set, you will also get an error message in the Message view.

Figure 8: Case Where the Optimization Cannot be Run

As shown in Figure 9: Case Where GDO Solves a Constraint Satisfaction Problem (p. 229), some combinations of Constraint Type, Lower Bound, and possibly Upper Bound settings specify that the problem is one of constraint satisfaction. The results obtained from these settings indicate only the boundaries of the feasible region in the design space. Currently, only the Screening option can be used to solve a pure constraint satisfaction problem.


Figure 9: Case Where GDO Solves a Constraint Satisfaction Problem

The Objective settings of the input parameters do not affect the sample generation of the MOGA, NLPQL, MISQP, Adaptive Single-Objective, or Adaptive Multiple-Objective methods. However, this setting will affect the ranking of the samples that generate the candidate designs when the Optimization is updated.

In a Screening optimization, sample generation is driven by the domain definition (i.e., the lower and upper bounds for the parameters) and is not affected by parameter objective and constraint settings. The parameter objective and constraint settings, however, do affect the generation of candidate points.

When a constraint is defined, it is treated as a constraint if Constraint Handling in the Optimization Properties view is set to Strict. If Constraint Handling is set to Relaxed, those constraints are actually treated as objectives. If only constraints are defined, Screening is your only optimization option. If at least one objective is defined, the other optimization methods are also available.

Typically, the Screening method is best suited for conducting a preliminary design study, because it is a fast, low-resolution, exhaustive study that can be useful in locating approximate solutions quickly. Since the Screening method does not depend on any parameter objectives, you can change the objectives after performing the Screening analysis to view the candidates that meet different objective sets, allowing you to quickly perform preliminary design studies. It is easy to keep changing the objectives and constraints to view the different corresponding candidates, which are drawn from the original sample set.

For example, you may run the Screening process for 3,000 samples, then use the Tradeoff or Samples charts to view the Pareto fronts. The solutions slider in the chart Properties can be used to display only the prominent points you may be interested in (usually the first few fronts). When you do a MOGA or Adaptive Multiple-Objective optimization, you can limit the number of Pareto fronts that are computed in the analysis.

Once all of your objectives are defined, click Update Optimization to generate up to the requested number of candidate points. You may save any of the candidates by right-clicking and selecting Explore Response Surface at Point, Insert as Design Point(s), or Insert as Refinement Point(s).

When working with a Response Surface Optimization system, you should validate the best obtained candidate results by saving the corresponding design points, solving them at the Project level, and comparing the results.

Goal Driven Optimization Theory

In this section, we’ll discuss theoretical aspects of Goal Driven Optimization and the different optimization methods available in DesignXplorer.


Sampling for Constrained Design Spaces
Shifted Hammersley Sampling (Screening)
Single-Objective Optimization Methods
Multiple-Objective Optimization Methods
Decision Support Process

Sampling for Constrained Design Spaces

Common samplings created in a constrained design space might produce unfeasible sample points if the underlying formulation disregards the constraints. DesignXplorer provides a sampling type that takes the constraints defined for input parameters into account.

DesignXplorer generates the sampling:

S(X, Y)

Subject to:

g_j(X, Y) ≤ 0,  j = 1, …, m
X_L ≤ X ≤ X_U
Y_L ≤ Y ≤ Y_U

with X ∈ R^(N_c) and Y ∈ N^(N_i).

The symbols X and Y denote the vectors of the continuous and the integer variables, respectively. DesignXplorer allows you to define a constrained design space with linear or non-linear constraints.

Example of a linear constraint:

x_1 + x_2 ≤ c

Example of a non-linear constraint:

x_1² + x_2² ≤ c

where c denotes a constant bound.

The constraint sampling is a heuristic method based on the Shifted-Hammersley (Screening) and MISQP sampling methods.

For a given screening of N sample points generated in the hypercube of input parameters, only a part of this sampling (M points) is within the constrained design space (i.e., the feasible domain).

To obtain a constrained sampling that is close to a uniform sampling of N_f points, DesignXplorer first determines the subset E of the screened points that lie in the feasible domain and evaluates its cardinality, Card(E):

If Card(E) = N_f, the sampling is completed.

If Card(E) > N_f, only the N_f furthermost sample points are kept.

Otherwise, DesignXplorer needs to create additional points to reach the N_f sample points.

The Screening is not guaranteed to find enough sample points, or even one feasible point. For this reason, DesignXplorer solves an MISQP problem for each constraint on input parameters, as follows:

Minimize:

C_j = g_j(X, Y)

Subject to:

g_i(X, Y) ≤ 0,  i = 1, …, m

In the best case, we obtain m feasible points.

If A is the center of mass of the m feasible points P_j, then:

A = (1/m) Σ_(j=1)^(m) P_j

DesignXplorer then solves the new MISQP problem:

Minimize:

C̃ = ‖P − A‖²

Subject to:

g_i(X, Y) ≤ 0,  i = 1, …, m

Once the point closest to the center of mass of the feasible points has been found, DesignXplorer projects part of the unfeasible points onto the skin of the feasible domain.

Given an unfeasible point B and a feasible point G, we can build a new feasible point C:

C = G + α·GB

where GB denotes the vector from G to B. To find C close to the skin of the feasible domain, the algorithm starts with α equal to 1 and decreases the α value until C is feasible.
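The α-reduction step can be sketched as follows. This is a hedged illustration: the unit-disk constraint, the 0.9 shrink factor, and the function names are assumptions of the example; the guide does not specify how α is decreased.

```python
def project_toward_skin(G, B, is_feasible, shrink=0.9, alpha_min=1e-6):
    """Move from feasible G toward unfeasible B; shrink alpha until the
    point C = G + alpha*(B - G) becomes feasible (near the domain skin)."""
    alpha = 1.0
    while alpha > alpha_min:
        C = tuple(g + alpha * (b - g) for g, b in zip(G, B))
        if is_feasible(C):
            return C
        alpha *= shrink          # decrease alpha until C is feasible
    return tuple(G)              # fall back to the feasible point itself

# Assumed example of a feasible domain: the unit disk x^2 + y^2 <= 1.
inside_unit_disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0

C = project_toward_skin((0.0, 0.0), (2.0, 0.0), inside_unit_disk)
```

With these assumptions, the returned point lies just inside the boundary of the disk, in the direction of the unfeasible point B.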


To optimize the distribution of points on the skin of the feasible domain, the algorithm chooses the furthermost unfeasible points in terms of angles. If B is the first unfeasible point processed, the next one is the point X whose direction from G deviates most from GB, that is, the point minimizing:

( GB · GX ) / ( ‖GB‖₂ ‖GX‖₂ )
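The angle-based choice of the next unfeasible point can be sketched in Python. Reading the criterion as minimizing the cosine of the angle between the vectors GB and GP is an interpretation, and the candidate points are invented for the example.

```python
import math

def next_unfeasible(G, B, candidates):
    """Pick the candidate P whose direction from G deviates most (in angle)
    from the direction of the previously processed point B."""
    u = [b - g for g, b in zip(G, B)]          # vector GB

    def cos_angle(P):
        v = [p - g for g, p in zip(G, P)]      # vector GP
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))

    return min(candidates, key=cos_angle)      # smallest cosine = widest angle

picked = next_unfeasible((0.0, 0.0), (2.0, 0.0),
                         [(1.5, 0.5), (-1.0, 2.0), (0.0, 3.0)])
```

Among the three candidates, the one pointing away from GB is selected, which spreads the projected points around the skin of the feasible domain.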

Once enough points have been generated on the skin of the feasible domain, the algorithm generates internal points. The internal points are obtained by combining internal points with points generated on the skin of the feasible domain.

Shifted Hammersley Sampling (Screening)

The Shifted Hammersley sampling method is the sampling strategy used for all sample generation, except for the NLPQL method. The conventional Hammersley sampling algorithm is a quasi-random number generator that has very low discrepancy and is used for quasi-Monte Carlo simulations. A low-discrepancy sequence is defined as a sequence of points that approximate the equidistribution in a multi-dimensional cube in an optimal way. In other words, the design space is populated almost uniformly by these sequences and, due to the inherent properties of Monte Carlo sampling, dimensionality is not a problem (i.e., the number of points does not increase exponentially with an increase in the number of input parameters). The conventional Hammersley algorithm is constructed by using the radical inverse function. Any integer n can be represented as a sequence of digits n0, n1, n2, ..., nm by the following equation:

(20) n ≡ n_m n_(m−1) ⋯ n_2 n_1 n_0

For example, consider the integer 687459: reading from the least significant digit, n0 = 9, n1 = 5, n2 = 4, and so on. Because this integer is represented with radix 10, we can write it as 687459 = 9 + 5 * 10 + 4 * 100 and so on. In general, for a radix-R representation, the equation is:

(21) n = n_0 + n_1 R + n_2 R² + ⋯ + n_m R^m

The inverse radical function is defined as the function which generates a fraction in (0, 1) by reversingthe order of the digits in Equation 21 (p. 232) about the decimal point, as shown below.


(22) Φ_R(n) = 0.n_0 n_1 n_2 ⋯ n_m = n_0 R^(−1) + n_1 R^(−2) + ⋯ + n_m R^(−m−1)

Thus, for a k-dimensional search space, the Hammersley points are given by the following expression:

(23) z_k(i) = [ i/N, Φ_(R1)(i), Φ_(R2)(i), ⋯, Φ_(R(k−1))(i) ]

where i = 0, ..., N indicates the sample points. Now, from the plot of these points, it is seen that the first row (corresponding to the first sample point) of the Hammersley matrix is zero and the last row is not 1. This implies that, for the k-dimensional hypercube, the Hammersley sampler generates a block of points that is skewed more toward the origin of the cube and away from the far edges and faces. To compensate for this bias, a point-shifting process is proposed that shifts all Hammersley points by the amount below:

(24) Δ = 1/(2N)

This moves the point set more toward the center of the search space and avoids unnecessary bias. Thus, the initial population always provides unbiased, low-discrepancy coverage of the search space.
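The radical inverse function and the shift Δ = 1/(2N) can be sketched in Python. This is an illustrative sketch; the use of successive primes as radices for the remaining k−1 dimensions is an assumption of the example.

```python
def radical_inverse(n, radix):
    """Reverse the radix-R digits of n about the decimal point (Eq. 22)."""
    inv, denom = 0.0, 1.0 / radix
    while n > 0:
        inv += (n % radix) * denom
        n //= radix
        denom /= radix
    return inv

def shifted_hammersley(N, k, radices=(2, 3, 5, 7, 11)):
    """N points in a k-dimensional unit hypercube (Eq. 23 plus the shift)."""
    shift = 1.0 / (2.0 * N)                    # Eq. 24
    pts = []
    for i in range(N):
        coords = [i / N] + [radical_inverse(i, radices[d])
                            for d in range(k - 1)]
        pts.append(tuple(c + shift for c in coords))
    return pts

samples = shifted_hammersley(N=8, k=2)
```

For the worked example above, radical_inverse(687459, 10) reverses the digits to give 0.954786, and every shifted sample coordinate stays strictly inside (0, 1).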

Single-Objective Optimization Methods

In this section, we’ll discuss the different types of single-objective optimization methods:

Nonlinear Programming by Quadratic Lagrangian (NLPQL)
Mixed-Integer Sequential Quadratic Programming (MISQP)
Adaptive Single-Objective Optimization (ASO)

Nonlinear Programming by Quadratic Lagrangian (NLPQL)

NLPQL (Nonlinear Programming by Quadratic Lagrangian) is a mathematical optimization algorithm developed by Klaus Schittkowski. This method solves constrained nonlinear programming problems of the form:

Minimize:

f(x)

Subject to:

g_k(x) ≤ 0,  ∀ k = 1, …, K
h_l(x) = 0,  ∀ l = 1, …, L

where

x_L ≤ x ≤ x_U

It is assumed that the objective function and constraints are continuously differentiable. The idea is to generate a sequence of quadratic programming subproblems obtained by a quadratic approximation of the Lagrangian function and a linearization of the constraints. Second order information is updated by a quasi-Newton formula, and the method is stabilized by an additional (Armijo) line search.


The method presupposes that the problem size is not too large and that it is well-scaled. Also, the accuracy of the method depends on the accuracy of the gradients. Since, for most practical problems, analytical gradients are unavailable, it is imperative that the numerical (finite-difference based) gradients be as accurate as possible.

Newton's Iterative Method

Before the actual derivation of the NLPQL equations, Newton’s iterative method for the solution of nonlinear equation sets is reviewed. Let f(x) be a multivariable function such that it can be expanded about the point x in a Taylor series:

(25) f({x} + {Δx}) ≈ f({x}) + {Δx}^T {f′({x})} + ½ {Δx}^T [f″({x})] {Δx}

where it is assumed that the Taylor series actually models a local area of the function by a quadratic approximation. The objective is to devise an iterative scheme by linearizing the vector Equation 25 (p. 234). To this end, it is assumed that at the end of the iterative cycle, Equation 25 (p. 234) would be exactly valid. This implies that the first variation of the following expression with respect to ∆x must be zero:

(26) φ({Δx}) = f({x} + {Δx}) − [ f({x}) + {Δx}^T {f′({x})} + ½ {Δx}^T [f″({x})] {Δx} ]

which implies that

(27) {f′({x} + {Δx})} − [ {f′({x})} + [f″({x})] {Δx} ] = {0}

The first expression indicates the first variation of the converged solution with respect to the increment in the independent variable vector. This gradient is necessarily zero, since the converged solution clearly does not depend on the step length. Thus, Equation 27 (p. 234) can be written as the following:

(28) {x}_(j+1) = {x}_j − [f″({x}_j)]^(−1) {f′({x}_j)}

where the index "j" indicates the iteration. Equation 28 (p. 234) is thus used in the main quadratic programming scheme.
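The Newton update x_(j+1) = x_j − [f″(x_j)]^(−1) f′(x_j) can be sketched for a one-dimensional function. The quartic test function f(x) = (x − 2)^4 + 1 is an assumed example, not from the guide.

```python
def newton_minimize(fprime, fsecond, x0, tol=1e-10, max_iter=100):
    """One-dimensional Newton iteration of Eq. 28 on the derivative of f."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)   # [f'']^(-1) f' in one dimension
        x -= step
        if abs(step) < tol:             # stop once the increment is negligible
            break
    return x

fprime = lambda x: 4.0 * (x - 2.0) ** 3     # f(x) = (x - 2)^4 + 1
fsecond = lambda x: 12.0 * (x - 2.0) ** 2
x_star = newton_minimize(fprime, fsecond, x0=5.0)
```

Starting from x = 5, the iterates converge to the minimizer x = 2, where f′ vanishes.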

NLPQL Derivation:

Consider the following single-objective nonlinear optimization problem. It is assumed that the problem is smooth and analytic throughout and is a problem of N decision variables.

Minimize:

f(x)

Subject to:

g_k(x) ≤ 0,  ∀ k = 1, …, K
h_l(x) = 0,  ∀ l = 1, …, L

where

(29) x_L ≤ x ≤ x_U


Here, K and L are the numbers of inequality and equality constraints. In many cases the inequality constraints are bounded above and below; in such cases, it is customary to split the constraint into two inequality constraints. In order to approximate the quadratic sub-problem assuming the presence of only equality constraints, the Lagrangian for Equation 29 (p. 234) is written as:

(30) Λ({x}, {λ}) = f({x}) + {λ}^T {h({x})}

Here, {λ} is an L-dimensional (non-zero) vector of Lagrange multipliers. Thus, Equation 30 (p. 235) becomes a functional which depends on two sets of independent vectors. In order to minimize this expression, we seek the stationarity of this functional with respect to the two sets of vectors. These expressions give rise to two sets of vector equations, as follows:

(31) {∇_x Λ} = {∇f} + [H] {λ} = {0}
     {∇_λ Λ} = {h({x})} = {0}

Equation 31 (p. 235) defines the Karush-Kuhn-Tucker (KKT) conditions, which are the necessary conditions for the existence of the optimal point. The first equation comprises "N" nonlinear algebraic equations and the second one comprises "L" nonlinear algebraic equations. The matrix [H] is an (N*L) matrix defined as the following:

(32) [H]_(N*L) = [ {∇h_1} {∇h_2} {∇h_3} ⋯ {∇h_L} ]

Thus, in Equation 32 (p. 235), each column is the gradient of the corresponding equality constraint. For convenience, the nonlinear equations of Equation 31 (p. 235) can be written as the following:

(33) {F({Y})} = {0}

where Equation 33 (p. 235) is an (N+L) system of nonlinear equations. The independent variable set {Y} can be written as:

(34) {Y} = { {x} ; {λ} }

while the functional set {F} is written as the following:

(35) {F} = { {∇_x Λ} ; {h} } = { {∇f} + [H]{λ} ; {h({x})} }

Referring to the section on Newton-based methods, the vector {Y} in Equation 34 (p. 235) is updated as the following:

(36) {Y}_(j+1) = {Y}_j + {ΔY}_j

The increment of the vector is given by the iterative scheme in Equation 28 (p. 234). Referring to Equation 33 (p. 235), Equation 34 (p. 235), and Equation 35 (p. 235), the iterative equation is expressed as the following:

(37) [∇F]_j {ΔY}_j = − {F}_j

Note that this is only a first order approximation of the Taylor expansion of Equation 33 (p. 235). This is in contrast to Equation 28 (p. 234), where a quadratic approximation is done. This is because in Equation 34 (p. 235), a first order approximation has already been done. The matrices and the vectors in Equation 37 (p. 235) can be expanded as the following:


(38) {ΔY} = { {Δx} ; {Δλ} }

and

(39) [∇F]_((N+L)*(N+L)) = [ [∇²Λ]_(N*N)  [H]_(N*L) ; [H]^T_(L*N)  [0]_(L*L) ]

This is obtained by taking the gradients of Equation 35 (p. 235) with respect to the two variable sets.

The sub-matrix [∇²Λ] is the (N*N) Hessian of the Lagrange function in implicit form.

To demonstrate how Equation 39 (p. 236) is formed, let us consider the following simple case. Given a vector function of two variables x and y, we write:

{V} = { u(x, y) ; v(x, y) }

It is required to find the gradient (i.e., the Jacobian) of the vector function V. To this effect, the derivation of the Jacobian is evident because, in the present context, the vector V indicates a set of nonlinear (algebraic) equations and the Jacobian is the coefficient matrix which "links" the increment of the independent variable vector to the dependent variable vector. Thus, we can write:

{ΔV} = { Δu ; Δv } = [ ∂u/∂x  ∂u/∂y ; ∂v/∂x  ∂v/∂y ] { Δx ; Δy }

Thus, the Jacobian matrix is formed. In some applications, the equation is written in the following form:

{ΔV} = { Δu ; Δv } = [ ∂u/∂x  ∂v/∂x ; ∂u/∂y  ∂v/∂y ]^T { Δx ; Δy }

where the column "i" indicates the gradient of the "i-th" component of the dependent variable vector with respect to the independent vector. This is the formalism we use in determining Equation 39 (p. 236).

Equation 37 (p. 235) may be rewritten as the following:


(40) [ [∇²Λ]_(N*N)  [H]_(N*L) ; [H]^T_(L*N)  [0]_(L*L) ]_j { {Δx} ; {Δλ} }_j = − { {∇_x Λ} ; {h} }_j

Solving Equation 40 (p. 237) iteratively will solve Equation 41 (p. 237) in a series of linear steps until the increment is negligible. The update schemes for the independent variable and Lagrange multiplier vectors x and λ are written as the following:

(41) {x}_(j+1) = {x}_j + {Δx}_j
     {λ}_(j+1) = {λ}_j + {Δλ}_j

The individual equations of Equation 40 (p. 237) are now written separately. The first equation (corresponding to minimization with respect to x) may be written as:

(42) [∇²Λ]_j {Δx}_j + [H]_j {Δλ}_j = − {∇_x Λ}_j
     [∇²Λ]_j {Δx}_j + [H]_j {Δλ}_j = − {∇f}_j − [H]_j {λ}_j
     [∇²Λ]_j {Δx}_j + [H]_j {λ}_(j+1) = − {∇f}_j

The last step in Equation 42 (p. 237) is done by using Equation 41 (p. 237). Thus, using Equation 42 (p. 237) and Equation 40 (p. 237), the iterative scheme can be rewritten in a simplified form as:

(43) [ [∇²Λ]  [H] ; [H]^T  [0] ]_j { {Δx}_j ; {λ}_(j+1) } = − { {∇f} ; {h} }_j

Thus, Equation 43 (p. 237) can be used directly to compute the (j+1)th value of the Lagrange multiplier vector. Note that by using Equation 43 (p. 237), it is possible to compute the update of x and the new value of the Lagrange multiplier vector λ in the same iterative step. Equation 43 (p. 237) shows the general scheme by which the KKT optimality condition can be solved iteratively for a generalized optimization problem.

Now, consider the following quadratic approximation problem as given by:

(44) Q({Δx}) = {∇f}^T {Δx} + ½ {Δx}^T [∇²Λ] {Δx}

Subject to the equality constraints given by the following:

(45) {h({x})} + [H]^T {Δx} = {0}

where the definitions of the matrices in Equation 44 (p. 237) and Equation 45 (p. 237) are given earlier. In order to solve the quadratic minimization problem, let us form the Lagrangian as:

(46) Γ({Δx}, {λ}) = {∇f}^T {Δx} + ½ {Δx}^T [∇²Λ] {Δx} + {λ}^T ( {h} + [H]^T {Δx} )

Now, the KKT conditions may be derived (as done earlier) by taking gradients of the Lagrangian in Equation 46 (p. 237), as follows:


(47) {∇_Δx Γ} = {∇f} + [∇²Λ] {Δx} + [H] {λ} = {0}
     {∇_λ Γ} = {h} + [H]^T {Δx} = {0}

In a condensed matrix form Equation 47 (p. 238) may be written as the following:

(48) [ [∇²Λ]  [H] ; [H]^T  [0] ] { {Δx} ; {λ} } = − { {∇f} ; {h} }

Equation 48 (p. 238) is the same as Equation 43 (p. 237); this implies that the iterative scheme of Equation 48 (p. 238) actually solves a quadratic subproblem (Equation 44 (p. 237) and Equation 45 (p. 237)) in the domain ∆x. If the real problem is quadratic, then this iterative scheme solves the exact problem.
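The single linear solve behind this scheme can be illustrated on a tiny equality-constrained quadratic problem. The problem (minimize x1² + x2² subject to x1 + x2 = 1) and the dense Gaussian elimination are assumptions for illustration; a production code would use a factorization library and quasi-Newton estimates of the Hessian.

```python
def solve_linear(A, b):
    """Naive Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

# KKT block system [[Hessian, H], [H^T, 0]] {dx, lambda} = -{grad f, h}
# evaluated at the starting point x = (0, 0):
hessian = [[2.0, 0.0], [0.0, 2.0]]   # of f = x1^2 + x2^2
H = [[1.0], [1.0]]                   # gradient of h = x1 + x2 - 1
grad_f = [0.0, 0.0]
h = [-1.0]                           # constraint value at (0, 0)

kkt = [hessian[0] + [H[0][0]],
       hessian[1] + [H[1][0]],
       [H[0][0], H[1][0], 0.0]]
rhs = [-g for g in grad_f] + [-v for v in h]
dx1, dx2, lam = solve_linear(kkt, rhs)
```

One solve returns both the step (dx1, dx2) = (0.5, 0.5), which lands exactly on the constrained minimizer since the problem is quadratic, and the multiplier λ = −1.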

On addition of inequality constraints, the Lagrangian of the actual problem can be written as the following:

(49) Λ({x}, {λ}, {μ}) = f({x}) + {λ}^T {h({x})} + {μ}^T ( {g({x})} + {y²} )

Note that the inequality constraints have been converted to equality constraints by using a set of slack variables y, as the following:

(50) g_k({x}) + y_k² = 0,  ∀ k = 1, …, M

The squared term is used to ensure that the slack variable remains positive, which is required to satisfy Equation 50 (p. 238). The Lagrangian in Equation 49 (p. 238) acts as an enhanced objective function. It is seen that the only case where the additional terms may be active is when the constraints are not satisfied.

The KKT conditions as derived from Equation 49 (p. 238) (by taking first variations with respect to the independent variable vectors) are:

(51) {∇_x Λ} = {∇f} + [H] {λ} + [G] {μ} = {0}_(N*1)
     {∇_λ Λ} = {h({x})} = {0}_(L*1)
     {∇_μ Λ} = {g({x})} + {y²} = {0}_(M*1)
     {∇_y Λ}:  2 μ_k y_k = 0,  ∀ k = 1, …, M

where [G] is the (N*M) matrix whose columns are the gradients of the inequality constraints, analogous to [H] in Equation 32 (p. 235).

From the KKT conditions in Equation 51 (p. 238), it is evident that there are (N + L + 2*M) equations for a similar number of unknowns, so this equation set possesses a unique solution. Let this (optimal) solution be marked as x. At this point, a certain number of constraints will be active and some others will be inactive. Let the number of active inequality constraints be p and the total number of active equality constraints be q. By an active constraint it is meant that the constraint is at its threshold value of zero. Thus, let J1 and J2 be the sets of active and inactive equality constraints (respectively), and let K1 and K2 be the sets of active and inactive inequality constraints, respectively. Thus, we can write the following relations:

(52) J1 ∩ J2 = ∅,  J1 ∪ J2 = J,  meas(J1) = q,  meas(J2) = L − q
     K1 ∩ K2 = ∅,  K1 ∪ K2 = K,  meas(K1) = p,  meas(K2) = M − p

where meas() indicates the count of the elements of the set under consideration. These sets thus partition the constraints into active and inactive sets. Thus, we can write:


(53) h_j = 0,  ∀ j ∈ J1
     g_j = 0,  y_j = 0,  μ_j ≠ 0,  ∀ j ∈ K1
     g_j < 0,  y_j ≠ 0,  μ_j = 0,  ∀ j ∈ K2

Thus, it can be stated that the last 3 equations in Equation 51 (p. 238) may be represented by Equation 53 (p. 239). These are the optimality conditions for constraint satisfaction. From these equations, we can now eliminate y such that the Lagrangian in Equation 49 (p. 238) will depend on only 3 independent variable vectors. From the last two conditions in Equation 53 (p. 239), we can write the following condition, which is always valid for an optimal point:

(54) μ_j g_j = 0,  ∀ j = 1, …, M

Using Equation 54 (p. 239) in the set Equation 51 (p. 238), the KKT optimality conditions may be written as the following:

(55) {∇f} + [H] {λ} + [G] {μ} = {0}_(N*1)
     {h({x})} = {0}_(L*1)
     [μ] {g({x})} = {0}_(M*1)

Thus, the new set contains only (N + L + M) unknowns. Now, following the same logic as in Equation 34 (p. 235), let us express Equation 55 (p. 239) in the same form as in Equation 33 (p. 235). This represents an (N + L + M) system of nonlinear equations. The independent variable set may be written in vector form as the following:

(56) {Y} = { {x} ; {λ} ; {μ} }

Newton’s iterative scheme is also used here; thus, the same equations as in Equation 36 (p. 235) and Equation 37 (p. 235) also apply. Thus, following Equation 37 (p. 235), we can write:

(57) {ΔY} = { {Δx} ; {Δλ} ; {Δμ} }

Taking the first variation of the KKT equations in Equation 55 (p. 239) and equating to zero, the sub-quadratic equation is formulated as the following:

(58) [ [∇²Λ]  [H]  [G] ; [H]^T  [0]  [0] ; [μ][G]^T  [0]  [ḡ] ] { {δx} ; {δλ} ; {δμ} } = − { {∇_x Λ} ; {h} ; [μ]{g} }

where [μ] indicates the diagonal matrix of the multipliers μ_k, and [ḡ] the diagonal matrix of the constraint values g_k.

At the jth step, the first equation can be written as (by linearization)

(59) [∇²Λ]_j {Δx}_j + [H]_j {Δλ}_j + [G]_j {Δμ}_j = − {∇f}_j − [H]_j {λ}_j − [G]_j {μ}_j

Which simplifies to


(60) [∇²Λ]_j {Δx}_j + [H]_j {λ}_(j+1) + [G]_j {μ}_(j+1) = − {∇f}_j

Thus, the linearized set of equations for Newton’s method to be applied can be written in an explicitform as:

(61) [ [∇²Λ]  [H]  [G] ; [H]^T  [0]  [0] ; [μ][G]^T  [0]  [ḡ] ]_j { {Δx}_j ; {λ}_(j+1) ; {μ}_(j+1) } = − { {∇f} ; {h} ; {0} }_j

So, in the presence of both equality and inequality constraints, Equation 61 (p. 240) can be used in a quasi-Newtonian framework to determine the increments {∆x}_j and the Lagrange multipliers {λ}_(j+1) and {µ}_(j+1) when stepping from iteration j to j+1.

The Hessian matrix ∇²Λ is not computed directly but is estimated and updated in a BFGS-type line search.
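As an illustration, the linearized system of Equation 61 (p. 240) can be assembled and solved numerically. The following sketch (not part of the manual) applies one Newton step to a toy equality-constrained quadratic problem, so the inequality rows drop out and the Hessian is exact rather than BFGS-estimated:

```python
import numpy as np

# One Newton step on the linearized KKT system, reduced to the
# equality-constrained case: minimize f(x) = x1^2 + x2^2 subject to
# h(x) = x1 + x2 - 1 = 0. For this quadratic problem a single step
# reaches the exact optimum x* = (0.5, 0.5) with lambda* = -1.

def kkt_newton_step(x):
    grad_f = 2.0 * x                      # gradient of f
    hess_L = 2.0 * np.eye(2)              # Hessian of the Lagrangian (exact here)
    A = np.array([[1.0, 1.0]])            # Jacobian [grad h]
    h = np.array([x[0] + x[1] - 1.0])     # equality constraint value

    # Block system  [H  A^T; A  0] {dx; lambda_new} = -{grad f; h}
    K = np.block([[hess_L, A.T],
                  [A, np.zeros((1, 1))]])
    rhs = -np.concatenate([grad_f, h])
    sol = np.linalg.solve(K, rhs)
    return x + sol[:2], sol[2:]           # updated x and new multiplier

x_new, lam_new = kkt_newton_step(np.array([0.0, 0.0]))
print(x_new)   # [0.5 0.5]
```

In the full algorithm the same block structure carries the extra inequality rows of Equation 61, and the Hessian block is replaced by its BFGS estimate.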

Mixed-Integer Sequential Quadratic Programming (MISQP)

MISQP (Mixed-Integer Sequential Quadratic Programming) is a mathematical optimization algorithm developed by Oliver Exler, Thomas Lehmann, and Klaus Schittkowski. This method solves mixed-integer nonlinear programming (MINLP) problems of the form:

Minimize:

$f(x, y)$

Subject to:

$g_j(x, y) = 0, \quad j = 1, \dots, m_e$

$g_j(x, y) \ge 0, \quad j = m_e + 1, \dots, m$

where

$x \in \mathbb{R}^{n_c}, \quad y \in \mathbb{N}^{n_i}$

$x_l \le x \le x_u$

$y_l \le y \le y_u$

The symbols x and y denote the vectors of the continuous and integer variables, respectively. It is assumed that the problem functions $f(x, y)$ and $g_j(x, y)$, $j = 1, \dots, m$, are continuously differentiable with respect to all $x \in \mathbb{R}^{n_c}$. It is not assumed that the integer variables can be relaxed. In other words, the problem functions are evaluated only at integer points and never at any fractional values in between.


MISQP solves MINLP problems by a modified sequential quadratic programming (SQP) method. After linearizing the constraints and constructing a quadratic approximation of the Lagrangian function, mixed-integer quadratic programs are successively generated and solved by an efficient branch-and-cut method. The algorithm is stabilized by a trust-region method, as originally proposed by Yuan for continuous programs, and second-order corrections are retained. The Hessian of the Lagrangian function is approximated by BFGS updates with respect to the continuous and integer variables. MISQP can also solve non-convex nonlinear mixed-integer programs.

References:

• O. Exler, K. Schittkowski, T. Lehmann (2012): A comparative study of numerical algorithms for nonlinear and nonconvex mixed-integer optimization, to appear in Mathematical Programming Computation.

• O. Exler, T. Lehmann, K. Schittkowski (2012): MISQP: A Fortran subroutine of a trust region SQP algorithm for mixed-integer nonlinear programming - user's guide, Report, Department of Computer Science, University of Bayreuth.

• O. Exler, K. Schittkowski (2007): A trust region SQP algorithm for mixed-integer nonlinear programming, Optimization Letters, Vol. 1, pp. 269-280.

Adaptive Single-Objective Optimization (ASO)

Adaptive Single-Objective is a mathematical optimization method that combines an OSF Design of Experiments, a Kriging response surface, and the MISQP optimization algorithm. It is a gradient-based algorithm that operates on a response surface to provide a refined, global, optimized result.

ASO optimization supports a single objective and multiple constraints, and is available for continuous parameters and continuous parameters with Manufacturable Values. It does not support the use of parameter relationships in the optimization domain and is available only for Direct Optimization systems.

Like the MISQP method, this method solves constrained nonlinear programming problems of the form:

Minimize:

$f(X), \quad X = \{x_1, x_2, \dots, x_n\}$

Subject to:

$g_k(X) \le 0, \quad \forall k = 1, \dots, K$

$h_l(X) = 0, \quad \forall l = 1, \dots, L$

where

$X_L \le X \le X_U$

The purpose is to refine and reduce the domain intelligently and automatically so as to provide the global extrema.

Adaptive Single-Objective Workflow

The workflow of the Adaptive Single-Objective optimization method is as follows:


Adaptive Single-Objective Steps

1. OSF Sampling

Optimal Space-Filling Design (OSF) is used for the Kriging construction. In the original OSF, the number of samples equals the number of divisions per axis, and there is one sample in each division.

When a new OSF is generated after a domain reduction, the reduced OSF has the same number of divisions as the original and keeps the existing design points within the new bounds. New design points are added until there is a point in each division of the reduced domain.

In the example below, the original domain has eight divisions per axis and contains eight design points. The reduced domain also has eight divisions per axis and includes two of the original design points. Six new design points need to be added in order to have a design point in each division.


Note

The total number of design points in the reduced domain can exceed the number in the original domain if multiple existing points wind up in the same division. In the example above, if two existing points wound up in the same division of the new domain, seven new design points (rather than six) would have been added in order to have a point in each of the remaining divisions.
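The refill step described above can be sketched in one dimension as follows; the sampling and bookkeeping here are illustrative stand-ins for the actual OSF algorithm:

```python
import random

# 1-D sketch of the OSF refill after a domain reduction: existing design
# points inside the new bounds are kept, and new points are added only in
# divisions that are still empty, so every division ends up occupied.

def refill_reduced_domain(points, lo, hi, divisions, rng=random.Random(0)):
    width = (hi - lo) / divisions
    kept = [p for p in points if lo <= p <= hi]          # survivors of the reduction
    occupied = {min(int((p - lo) / width), divisions - 1) for p in kept}
    new_points = [lo + (d + rng.random()) * width        # one sample per empty division
                  for d in range(divisions) if d not in occupied]
    return kept, new_points

# Mirrors the example in the text: 2 of the original points survive the
# reduction, so 6 new points are needed to fill the 8 divisions.
kept, added = refill_reduced_domain(
    points=[0.05, 0.31, 0.62, 0.97], lo=0.25, hi=0.75, divisions=8)
print(len(kept), len(added))   # 2 6
```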

2. Kriging Generation

A response surface is created for each output, based on the current OSF and consequently on the current domain bounds.

For details on the Kriging algorithm, see Kriging (p. 78) or Kriging Algorithms (p. 217).

3. MISQP Algorithm

MISQP is run on the current Kriging response surface to find potential candidates. Several MISQP processes are run at the same time, each beginning from a different starting point and consequently yielding different candidates.

4. Candidate Point Validation

Each obtained candidate is validated or rejected based on the Kriging error predictor: the candidate point is checked to see whether further refinement of the Kriging surface would change the selection of this point. A candidate is considered acceptable if, according to this error prediction, no points call it into question. If the quality of the candidate is called into question, the domain bounds are reduced; otherwise, the candidate is calculated as a verification point.

• Refinement Point Creation (If the selection will not be changed)

When a new verification point is calculated, it is inserted into the current Kriging as a refinement point, and the MISQP process is restarted.

• Domain Reduction (If the selection will be changed)

When candidates are validated, new domain bounds must be calculated. If all of the candidates are in the same zone, the bounds are reduced, centered on the candidates; otherwise, the bounds are reduced to an inclusive box around all candidates. At each domain reduction, a new OSF is generated (conserving the design points that lie within the new bounds), and a new Kriging is generated based on this new OSF.

5. Convergence and Stop Criteria

The optimization is considered to be converged when the candidates found are stable. However, four stop criteria can also halt the algorithm: Maximum Number of Evaluations, Maximum Number of Domain Reductions, Percentage of Domain Reductions, and Convergence Tolerance.

Multiple-Objective Optimization Methods

In this section, we'll discuss both the theoretical aspects and the different optimization methods available for multi-objective optimization:

Pareto Dominance in Multi-Objective Optimization
Convergence Criteria in MOGA-Based Multi-Objective Optimization
Multi-Objective Genetic Algorithm (MOGA)
Adaptive Multiple-Objective Optimization (AMO)

Pareto Dominance in Multi-Objective Optimization

The concept of Pareto dominance is of extreme importance in multi-objective optimization, especially where some or all of the objectives and constraints are mutually conflicting. In such a case, there is no single point that yields the "best" value for all objectives and constraints (i.e., the Utopia Point). Instead, the best solutions, often called a Pareto or non-dominated set, are a group of solutions such that selecting any one of them in place of another will always sacrifice quality for at least one objective or constraint while improving at least one other. Formally, Pareto optimality for generic optimization problems can be described as in the following equations.

Taking a closer, more formal look at the multi-objective optimization problem, let the following denote the set of all feasible solutions (i.e., solutions that do not violate the constraints):

(62) $X = \left\{ x \in \mathbb{R}^n \mid g_i(x) \ge 0, \; i = 1, \dots, m; \; x_l \le x \le x_u \right\}$

The problem can then be simplified to:

(63) $\min_{x \in X} \left\{ f_1(x), f_2(x), \dots, f_k(x) \right\}$

If there exists x* ∈ X such that x* is optimal for all objective functions, then, for i = 1, ..., k:

(64) $f_i(x^*) \le f_i(x) \quad \forall x \in X$

This indicates that x* is certainly a desirable solution. Unfortunately, this is a utopian situation that rarely exists, as it is unlikely that all fi(x) will attain their minimum values for X at a common point x*. The question remains: what solution should be used? That is, how should an "optimal" solution be defined? First, consider the so-called ideal (utopian) solution. In order to define this solution, separately attainable minima must be found for all objective functions. Assuming these exist, let xi* be the solution of the scalar optimization problem:

(65) $f_i^* = \min_{x \in X} f_i(x), \quad i = 1, \dots, k$


Here fi* is called the individual minimum for the scalar problem i; the vector f* = (f1*, f2*, ..., fk*) is called the ideal vector for the multi-objective optimization problem; and the point in X which determines this vector is the ideal solution.

It is usually not true that Equation 66 (p. 245) holds, although it would be useful, as the multi-objective problem would then have been solved by considering a sequence of scalar problems. It is necessary to define a new form of optimality, which leads to the concept of Pareto optimality. Introduced by V. Pareto in 1896, it is still the most important part of multi-objective optimization.

(66) $f(x^*) = f^* = \left( f_1^*, f_2^*, \dots, f_k^* \right)$

A point x* ∈ X is said to be Pareto optimal for the problem if there is no other vector x ∈ X such that, for all i = 1, ..., k:

(67) $f_i(x) \le f_i(x^*)$

and, for at least one i ∈ {1, ..., k}:

(68) $f_i(x) < f_i(x^*)$

This definition is based on the intuitive conviction that the point x* ∈ X is chosen as optimal if no criterion can be improved without worsening at least one other criterion. Unfortunately, the Pareto optimum almost always gives not a single solution but a set of solutions. Pareto optimality is usually spoken of as being global or local, depending on the neighborhood of the solutions X; in this sense, almost all traditional algorithms can at best guarantee local Pareto optimality. However, this MOGA-based system, which incorporates global Pareto filters, yields the global Pareto front.
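The dominance test of Equations 67 and 68 translates directly into a filter that extracts the non-dominated set; this sketch assumes all objectives are minimized:

```python
# Pareto-dominance test from Equations 67-68: point a dominates point b
# if a is no worse in every objective and strictly better in at least one.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset (a global Pareto filter)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(pareto_front(pts))   # [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) is dominated by (2, 3) and (5, 5) is dominated by (1, 5), so neither survives the filter.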

Convergence Criteria in MOGA-Based Multi-Objective Optimization

Convergence criteria are the conditions that indicate when the optimization has converged. In the MOGA-based multi-objective optimization methods, the following convergence criteria are available:

Maximum Allowable Pareto Percentage

The Maximum Allowable Pareto Percentage criterion looks for a percentage that represents a specified ratio of Pareto points to the Number of Samples Per Iteration. When this percentage is reached, the optimization is converged.

Convergence Stability Percentage

The Convergence Stability Percentage criterion looks for population stability, based on the mean and standard deviation of the output parameters. When a population is stable with regard to the previous one, the optimization is converged. The criterion functions in the following sequence:

• Population 1: When the optimization is run, the first population is not taken into account. Because this population was not generated by the MOGA algorithm, it is not used as a range reference for the output range (for scaling values).

• Population 2: The second population is used to set the range reference. The minimum, maximum, range, mean, and standard deviation are calculated for this population.

• Population 3 and beyond: Starting with the third population, the minimum and maximum output values are used in the next steps to scale the values (on a scale of 0 to 100). The mean variations and standard deviation variations are checked; if both of these are smaller than the value of the Convergence Stability Percentage property, the algorithm has converged.


So, at each iteration and for each active output, convergence occurs if:

$\frac{\left| \mathrm{Mean}_i - \mathrm{Mean}_{i-1} \right|}{\mathrm{Max} - \mathrm{Min}} < S \qquad \text{and} \qquad \frac{\left| \mathrm{StdDev}_i - \mathrm{StdDev}_{i-1} \right|}{\mathrm{Max} - \mathrm{Min}} < S$

where:

S = Stability Percentage

Mean_i = mean of the ith population

StdDev_i = standard deviation of the ith population

Max = maximum output value calculated on the first generated population of MOGA

Min = minimum output value calculated on the first generated population of MOGA
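The stability test above can be sketched as follows; as an illustrative simplification, the stability percentage s is expressed here as a fraction, and the scaling range is passed in from the reference population:

```python
# Sketch of the Convergence Stability Percentage test: for each output,
# compare the mean and standard-deviation shifts between two consecutive
# populations, scaled by the output range of the reference population.

def is_stable(prev, curr, out_min, out_max, s):
    def mean(v):
        return sum(v) / len(v)
    def stdev(v):
        m = mean(v)
        return (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5
    scale = out_max - out_min
    return (abs(mean(curr) - mean(prev)) / scale < s and
            abs(stdev(curr) - stdev(prev)) / scale < s)

pop_prev = [10.0, 12.0, 11.0, 13.0]   # illustrative output values
pop_curr = [10.2, 12.1, 11.1, 12.9]
print(is_stable(pop_prev, pop_curr, out_min=10.0, out_max=13.0, s=0.05))   # True
```

With a tighter tolerance (for example s = 0.01) the same pair of populations would not pass the test.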

Multi-Objective Genetic Algorithm (MOGA)

The MOGA used in GDO is a hybrid variant of the popular NSGA-II (Non-dominated Sorted Genetic Algorithm-II), based on controlled elitism concepts. It supports all types of input parameters. The Pareto ranking scheme is done by a fast non-dominated sorting method that is an order of magnitude faster than traditional Pareto ranking methods. The constraint handling uses the same non-dominance principle as the objectives; thus, penalty functions and Lagrange multipliers are not needed. This also ensures that feasible solutions are always ranked higher than infeasible solutions.

The first Pareto front solutions are archived internally in a sample set that is distinct from the evolving sample set. This ensures minimal disruption of Pareto front patterns already available from earlier iterations. You can control the selection pressure (and, consequently, the elitism of the process) to avoid premature convergence by altering the Maximum Allowable Pareto Percentage property. (For more information on this and other MOGA properties, see Performing a MOGA Optimization (p. 132).)

MOGA Workflow

The workflow of the MOGA optimization method is as follows:


MOGA Steps

1. First Population of MOGA

The initial population is used to run the MOGA algorithm.

2. MOGA Generates a New Population

MOGA is run and generates a new population via Crossover and Mutation. After the first iteration, each population is run when it reaches the number of samples defined by the Number of Samples Per Iteration property. For details, see MOGA Steps to Generate a New Population (p. 248).

3. Design Point Update

The design points in the new population are updated.

4. Convergence Validation

The optimization is validated for convergence.

• Yes: Optimization Converged

MOGA converges when the Maximum Allowable Pareto Percentage or the Convergence Stability Percentage has been reached.

• No: Optimization Not Converged


If the optimization is not converged, the process continues to the next step.

5. Stopping Criteria Validation

If the optimization has not converged, it is validated for fulfillment of stopping criteria.

• Yes: Stopping Criteria Met

When the Maximum Number of Iterations criterion is met, the process is stopped without having reached convergence.

• No: Stopping Criteria Not Met

If the stopping criteria have not been met, MOGA is run again to generate a new population (return to Step 2).

6. Conclusion

Steps 2 through 5 are repeated in sequence until the optimization has converged or the stopping criteria have been met. When either of these occurs, the optimization concludes.

MOGA Steps to Generate a New Population

The process MOGA uses to generate a new population comprises two main steps: Crossover and Mutation.

1. Crossover

Crossover combines (mates) two chromosomes (parents) to produce a new chromosome (offspring). The idea behind crossover is that the new chromosome may be better than both of its parents if it takes the best characteristics from each of them. Crossover occurs during evolution according to a user-definable crossover probability.

• Crossover for Continuous Parameters

A crossover operator linearly combines two parent chromosome vectors to produce two new offspring according to the following equations:

Offspring1 = a · Parent1 + (1 − a) · Parent2

Offspring2 = (1 − a) · Parent1 + a · Parent2

For example, when two parents, each consisting of four floating-point genes, are selected for crossover with a = 0.7, each gene of the two offspring is the corresponding weighted combination of the parent genes.
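The linear crossover above can be sketched as follows (the gene values are illustrative, not taken from the manual):

```python
# Linear crossover for continuous parameters: each offspring gene is a
# weighted combination of the corresponding parent genes.

def linear_crossover(parent1, parent2, a):
    off1 = [a * p + (1 - a) * q for p, q in zip(parent1, parent2)]
    off2 = [(1 - a) * p + a * q for p, q in zip(parent1, parent2)]
    return off1, off2

o1, o2 = linear_crossover([0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8], a=0.7)
print([round(g, 2) for g in o1])   # [0.22, 0.32, 0.42, 0.52]
print([round(g, 2) for g in o2])   # [0.38, 0.48, 0.58, 0.68]
```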


• Crossover for Discrete / Manufacturable Values Parameters

Each discrete or Manufacturable Values parameter is represented by a binary chain corresponding to its number of levels. For example, a parameter with two values (levels) is encoded in one bit, a parameter with seven values is encoded in three bits, and, in general, an n-bit chain can represent a parameter with up to 2^n values.

The concatenation of these chains forms the chromosome, which will cross over with another chromosome.

Three different kinds of crossover are available:

– One Point

A One Point crossover operator randomly selects a crossover point within a chromosome, then interchanges the two parent chromosomes at this point to produce two new offspring.

– Two Point

A Two Point crossover operator randomly selects two crossover points within a chromosome, then interchanges the two parent chromosomes between these points to produce two new offspring.

– Uniform

A Uniform crossover operator decides (with some probability, known as the "mixing ratio") which parent will contribute each of the gene values in the offspring chromosomes. This allows the parent chromosomes to be mixed at the gene level rather than at the segment level (as with one- and two-point crossover). For some problems, this additional flexibility outweighs the disadvantage of destroying building blocks.
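The three binary crossover operators can be sketched as follows; the chromosomes and cut points are illustrative:

```python
import random

# One-point, two-point, and uniform crossover on binary chromosomes.

def one_point(p1, p2, cut):
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def two_point(p1, p2, c1, c2):
    return (p1[:c1] + p2[c1:c2] + p1[c2:],
            p2[:c1] + p1[c1:c2] + p2[c2:])

def uniform(p1, p2, mixing_ratio=0.5, rng=random.Random(0)):
    o1, o2 = [], []
    for g1, g2 in zip(p1, p2):
        if rng.random() < mixing_ratio:   # which parent contributes this gene
            o1.append(g1); o2.append(g2)
        else:
            o1.append(g2); o2.append(g1)
    return o1, o2

a, b = [0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1]
print(one_point(a, b, cut=2))        # ([0, 0, 1, 1, 1, 1], [1, 1, 0, 0, 0, 0])
print(two_point(a, b, c1=2, c2=4))   # ([0, 0, 1, 1, 0, 0], [1, 1, 0, 0, 1, 1])
```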


2. Mutation

Mutation alters one or more gene values in a chromosome from its initial state. This can result in entirely new gene values being added to the gene pool. With these new gene values, the genetic algorithm may be able to arrive at a better solution than was previously possible. Mutation is an important part of the genetic search, as it helps to prevent the population from stagnating at any local optima. Mutation occurs during evolution according to a user-defined mutation probability.

• Mutation for Continuous Parameters

For continuous parameters, a polynomial mutation operator is applied to implement mutation.

C = P + (UpperBound − LowerBound) · δ

where C is the child gene, P is the parent gene, and δ is a small variation calculated from a polynomial distribution.
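A common form of polynomial mutation can be sketched as follows; the exact distribution used by DesignXplorer is not specified here, so the delta computation below is an assumption based on the standard polynomial mutation operator:

```python
import random

# Polynomial mutation sketch for a continuous gene: the child is the parent
# plus a small range-scaled perturbation delta; the distribution index eta
# controls how tightly delta clusters around zero.

def polynomial_mutation(parent, lower, upper, eta=20.0, rng=random.Random(1)):
    u = rng.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0          # delta in [-1, 0)
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))  # delta in [0, 1)
    child = parent + (upper - lower) * delta
    return min(max(child, lower), upper)   # clamp to the parameter bounds

c = polynomial_mutation(parent=1.0, lower=0.9, upper=1.1)
print(0.9 <= c <= 1.1)   # True
```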

• Mutation for Discrete / Manufacturable Values Parameters

For discrete or Manufacturable Values parameters, a mutation operator simply inverts the value of the chosen gene (0 goes to 1, and 1 goes to 0) with a probability of 0.5. This mutation operator can only be used for binary genes.


Adaptive Multiple-Objective Optimization (AMO)

Adaptive Multiple-Objective is a mathematical optimization method that combines a Kriging response surface and the MOGA optimization algorithm. It allows you either to generate a new sample set or to use an existing one, providing a more refined approach than the Screening method. Except when necessary, the optimizer does not evaluate all design points. The general optimization approach is the same as for MOGA, but a Kriging response surface is used: part of the population is "simulated" by evaluations of the Kriging, and the Kriging error predictor reduces the number of evaluations used in finding the first Pareto front solutions.

AMO optimization supports multiple objectives and multiple constraints, and is limited to continuous parameters and continuous parameters with Manufacturable Values. It is available only for Direct Optimization systems.

Note

AMO does not support discrete parameters because, with discrete parameters, it is necessary to construct a separate response surface for each discrete combination. When discrete parameters are used, MOGA is the more efficient optimization method. For details on how MOGA works, see Multi-Objective Genetic Algorithm (MOGA) (p. 246).

Adaptive Multiple-Objective Workflow

The workflow of the Adaptive Multiple-Objective optimization method is as follows:


Adaptive Multiple-Objective Steps

1. First Population of MOGA

The initial population of MOGA is used for the Kriging construction.

2. Kriging Generation

A Kriging response surface is created for each output, based on the first population, and then improved during the simulation with the addition of new design points.

For details on the Kriging algorithm, see Kriging (p. 78) or Kriging Algorithms (p. 217).

3. MOGA Algorithm

MOGA is run, using the Kriging as an evaluator. After the first iteration, each population is run when it reaches the number of samples defined by the Number of Samples Per Iteration property.

4. Evaluate the Population

5. Error Check


The Kriging error predictor is checked for each point.

• Yes: Error Acceptable

Each point is validated for error. If the error for a given point is acceptable, the approximated point is included in the next population to be run through the MOGA algorithm (return to Step 3).

• No: Error Not Acceptable

If the error is not acceptable, the points are promoted to design points. The new design points are used to improve the Kriging (return to Step 2) and are included in the next population to be run through the MOGA algorithm (return to Step 3).
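The error-check triage of Steps 4 and 5 can be sketched as follows; the surrogate, error predictor, and solver below are simple stand-ins, not the actual Kriging model:

```python
# Sketch of the AMO error check: points whose predicted surrogate error is
# acceptable keep their cheap approximated value; the rest are promoted to
# real design points and evaluated with the expensive solver, and would then
# be fed back to refine the surrogate.

def true_solver(x):
    return (x - 0.3) ** 2                 # placeholder for the real simulation

def triage_population(population, surrogate, error_estimate, tol):
    approximated, promoted = [], []
    for x in population:
        if error_estimate(x) <= tol:
            approximated.append((x, surrogate(x)))   # cheap: predicted value
        else:
            promoted.append((x, true_solver(x)))     # expensive: real evaluation
    return approximated, promoted

surrogate = lambda x: (x - 0.3) ** 2 + 0.01          # assumed slightly-off model
error_estimate = lambda x: abs(x - 0.5)              # assumed error predictor
approx, promoted = triage_population([0.45, 0.55, 0.9],
                                     surrogate, error_estimate, tol=0.1)
print(len(approx), len(promoted))   # 2 1
```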

6. Convergence Validation

The optimization is validated for convergence.

• Yes: Optimization Converged

MOGA converges when the maximum allowable Pareto percentage has been reached. When this happens, the process is stopped.

• No: Optimization Not Converged

If the optimization is not converged, the process continues to the next step.

7. Stopping Criteria Validation

If the optimization has not converged, it is validated for fulfillment of the stopping criteria.

• Yes: Stopping Criteria Met

When the maximum number of iterations has been reached, the process is stopped without having reached convergence.

• No: Stopping Criteria Not Met

If the stopping criteria have not been met, the MOGA algorithm is run again (return to Step 3).

8. Conclusion

Steps 2 through 7 are repeated in sequence until the optimization has converged or the stopping criteria have been met. When either of these occurs, the optimization concludes.

Decision Support Process

The Decision Support Process is a goal-based, weighted, aggregation-based design ranking technique. It is the final step in the optimization, in which the optimization results are post-processed.

During an optimization, the DesignXplorer optimizer generates a sample set as follows:


• Screening: the sample set corresponds to the number of Screening points plus the Min-Max Search results (omitting search results that duplicate existing points)

• Single-objective optimization (NLPQL, MISQP, ASO): the sample set corresponds to the iteration points

• Multiple-objective optimization (MOGA, AMO): the sample set corresponds to the final population

The Decision Support Process sorts the sample set (using the cost function) in order to extract the best candidates. The cost function takes into account both the Importance level of the objectives and constraints and the feasibility of the points. (The feasibility of a point depends on how constraints are handled: when the Constraint Handling property is set to Relaxed, all infeasible points are included in the sort; when the property is set to Strict, all infeasible points are removed from the sort.) Once the sample set has been sorted, you can change the Importance level and/or Constraint Handling properties for one or more constraints or objectives without causing DesignXplorer to create more design points; the Decision Support Process simply re-sorts the existing sample set.

Given n input parameters, m output parameters, and their individual targets, the collection of objectives is combined into a single weighted objective function, Φ, which is sampled by means of a direct Monte Carlo method using a uniform distribution. The candidate points are subsequently ranked by ascending magnitude of Φ. The function Φ (where all continuous input parameters have usable values of type "Continuous") is given by the following:

(69) $\Phi \equiv \sum_{i=1}^{n} w_i N_i + \sum_{j=1}^{m} w_j M_j$

where:

w_i and w_j = weights, defined in Equation 72 (p. 254)

N_i and M_j = normalized objectives for the input and output parameters, respectively

The normalized objectives (metrics) are:

(70) $N_i = \frac{\left| x - x_t \right|}{x_u - x_l}$

(71) $M_j = \frac{\left| y - y_t \right|}{y_{\mathrm{max}} - y_{\mathrm{min}}}$

where:

x = current value of input parameter i

y = current value of output parameter j

x_t and y_t = corresponding target values

x_l and x_u = lower and upper bounds, respectively, of input parameter i

y_min and y_max = corresponding lower and upper bounds, respectively, of output parameter j
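The normalization and weighting of Equations 69 through 71 can be sketched as a simple ranking; the goals, weights, and candidate values below are illustrative (input-parameter metrics only):

```python
# Decision Support ranking sketch: each candidate's parameters are normalized
# (distance to target scaled by the parameter range, Equations 70-71),
# weighted, and summed into Phi (Equation 69); candidates are then sorted
# by ascending Phi, so the smallest Phi is the best candidate.

def normalize(value, target, low, high):
    return abs(value - target) / (high - low)

def phi(candidate, goals, weights):
    # candidate: {name: value}; goals: {name: (target, low, high)}
    return sum(w * normalize(candidate[p], *goals[p]) for p, w in weights.items())

goals = {"p1": (1.0, 0.0, 2.0), "p2": (5.0, 0.0, 10.0)}
weights = {"p1": 1.0, "p2": 0.5}                      # assumed Importance weights
candidates = [{"p1": 1.1, "p2": 6.0}, {"p1": 0.2, "p2": 5.0}]
ranked = sorted(candidates, key=lambda c: phi(c, goals, weights))
print(ranked[0])   # {'p1': 1.1, 'p2': 6.0}
```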

The fuzziness of the combined objective function derives from the weights w, which are simply defined as follows:

(72) $w = \begin{cases} w_{\mathrm{higher}} & \text{if the Importance is "Higher"} \\ w_{\mathrm{default}} & \text{if the Importance is "Default"} \\ w_{\mathrm{lower}} & \text{if the Importance is "Lower"} \end{cases}$

where each Importance level maps to a fixed weight value, with $w_{\mathrm{higher}} > w_{\mathrm{default}} > w_{\mathrm{lower}}$.


The labels used are defined in Defining Optimization Objectives and Constraints (p. 153).

The targets represent the desired values of the parameters and are defined for the continuous input parameters as follows:

(73) $x_t = \begin{cases} x & \text{if the Objective Type is "No Objective"} \\ x_l & \text{if the Objective Type is "Minimize"} \\ \frac{1}{2}\left( x_l + x_u \right) & \text{if the Objective Type is "Seek Target"} \\ x_u & \text{if the Objective Type is "Maximize"} \end{cases}$

and, for the output parameters, we have the following desired values:

(74) $y_t = \begin{cases} y & \text{if the Objective Type is "No Objective"} \\ y_{\mathrm{min}} & \text{if the Objective Type is "Minimize"} \\ y_t^* & \text{if the Objective Type is "Seek Target"} \\ y_{\mathrm{max}} & \text{if the Objective Type is "Maximize"} \\ y_t^* & \text{if the Constraint Type is "Values <= Upper Bound", "Upper Bound" is defined, and } y_t^* \le y_{t2} \\ y_{t2} & \text{if the Constraint Type is "Values <= Upper Bound", "Upper Bound" is defined, and } y_t^* \ge y_{t2} \\ y_t^* & \text{if the Constraint Type is "Values >= Lower Bound", "Lower Bound" is defined, and } y_t^* \ge y_{t1} \\ y_{t1} & \text{if the Constraint Type is "Values >= Lower Bound", "Lower Bound" is defined, and } y_t^* \le y_{t1} \\ y_t^* & \text{if the Constraint Type is "Lower Bound <= Values <= Upper Bound" and } y_{t1} \le y_t^* \le y_{t2} \\ y_{t1} & \text{if the Constraint Type is "Lower Bound <= Values <= Upper Bound" and } y_t^* < y_{t1} \\ y_{t2} & \text{if the Constraint Type is "Lower Bound <= Values <= Upper Bound" and } y_t^* > y_{t2} \end{cases}$

where:

y_t* = user-specified target

y_t1 = the constraint lower bound

y_t2 = the constraint upper bound

Thus, Equation 72 (p. 254) and Equation 73 (p. 255) constitute the input parameter objectives for the continuous input parameters, and Equation 72 (p. 254) and Equation 74 (p. 255) constitute the output parameter objectives and constraints.

The following section considers the case where the continuous input parameters have discrete Manufacturable Values ("Levels" in the parameters' Table view) and where there may be discrete input parameters of type "Discrete." Consider the case where the Manufacturable Values of a continuous input parameter are defined as the following:

(75) $x \in \left\{ x_0, x_1, \dots, x_{l-1} \right\}$

With l usable values, the following metric is defined:

(76) $N_i = \frac{\left| x - x_t \right|}{x_{\mathrm{MAX}} - x_{\mathrm{MIN}}}$

where, as before:

x_MAX = upper bound of the usable values

x_MIN = lower bound of the usable values

The target value x_t is given by the following:

(77) $x_t = \begin{cases} x & \text{if the Constraint Type is "No Constraint"} \\ x_t^* & \text{if the Objective Type is "Seek Target"} \\ x_t^* & \text{if the Constraint Type is "Values <= Upper Bound", "Upper Bound" is defined, and } x_t^* \le x_{t2} \\ x_{t2} & \text{if the Constraint Type is "Values <= Upper Bound", "Upper Bound" is defined, and } x_t^* \ge x_{t2} \\ x_t^* & \text{if the Constraint Type is "Values >= Lower Bound", "Lower Bound" is defined, and } x_t^* \ge x_{t1} \\ x_{t1} & \text{if the Constraint Type is "Values >= Lower Bound", "Lower Bound" is defined, and } x_t^* \le x_{t1} \end{cases}$

where x_t* is the user-specified target and x_t1 and x_t2 are the constraint lower and upper bounds, respectively.

Thus, the GDO objective equation becomes the following (for parameters with discrete usable values):

(78) $\Phi \equiv \sum_{i=1}^{n} w_i N_i + \sum_{j=1}^{m} w_j M_j + \sum_{k=1}^{l} w_k N_k$

Therefore, Equation 71 (p. 254), Equation 72 (p. 254), and Equation 76 (p. 255) constitute the input parameter objectives for parameters that may be continuous or possess discrete usable values.

The norms, objectives, and constraints in Equation 76 (p. 255) and Equation 77 (p. 256) are also adopted to define the input goals for discrete input parameters of type Discrete; that is, those discrete parameters whose usable alternatives indicate a whole number of some particular design feature (the number of holes in a plate, the number of stiffeners, etc.).

Thus, Equation 71 (p. 254), Equation 72 (p. 254), and Equation 76 (p. 255) constitute the input parameter goals for "Discrete" parameters.

Therefore, the GDO objective function equation for the most general case (where there are continuous and discrete parameters) can be written as the following:

(79) $\Phi \equiv \underbrace{\sum_{i=1}^{n} w_i N_i}_{\text{continuous input parameters}} + \underbrace{\sum_{j=1}^{m} w_j M_j}_{\text{output parameters}} + \underbrace{\sum_{k=1}^{l} w_k N_k}_{\text{Manufacturable Values + Discrete}}$

where:

n = number of continuous input parameters

m = number of output parameters

l = number of continuous input parameters with Manufacturable Values (Levels) plus Discrete parameters

From the normed values, it is clear that the lower the value of Φ, the better the design with respect to the desired values and Importance levels. Thus, a quasi-random uniform sampling of design points is done by a Hammersley algorithm, and the samples are sorted in ascending order of Φ. The desired number of designs is then drawn from the top of the sorted list. A crowding technique is employed to ensure that no two sampled design points are very close to each other in the space of the input parameters.

Release 15.0 - © SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information of ANSYS, Inc. and its subsidiaries and affiliates.

DesignXplorer Theory


Rating Candidate Design Points

Each parameter range is divided into 6 zones, or rating scales. The location of a design candidate value in the range is measured according to the rating scales. For example, for parameter X with a range of 0.9 to 1.1, the rating scale for a design candidate value of 1.0333 is calculated as follows:

(((Absolute(1.0333 − 1.1)) / (1.1 − 0.9)) × 6) − (6/2) ≈ 2.0 − 3 = −1 [one star] (0 indicates neutral; negative values indicate closer to the target, down to −3; positive values indicate farther from the target, up to +3)

Following the same procedure, the rating scale for a design candidate value of 0.9333 is 5.0 − 3 = +2 [two crosses] (farther from the target). Therefore, the extreme cases are as follows:

1. Design candidate value of 0.9 (the worst): the rating scale is 6 − 3 = +3 [three crosses]

2. Design candidate value of 1.1 (the best): the rating scale is 0 − 3 = −3 [three stars]

3. Design candidate value of 1.0 (neutral): the rating scale is 3 − 3 = 0 [dash]
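The calculation above can be wrapped in a small helper (an illustrative sketch; the function name `rating_scale` is hypothetical, and the target value is passed explicitly):

```python
def rating_scale(value, low, high, target):
    # Normalized distance from the target, mapped onto 6 zones and shifted
    # so that 0 is neutral: -3 (three stars, at the target) up to
    # +3 (three crosses, farthest from the target).
    return round(abs(value - target) / (high - low) * 6) - 3
```

With `low=0.9`, `high=1.1`, and `target=1.1`, the helper reproduces the cases above: 1.0333 → −1, 0.9333 → +2, 0.9 → +3, 1.1 → −3, and 1.0 → 0.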

Note

Objective-driven parameter values with inequality constraints receive either three stars (the constraint is met) or three red crosses (the constraint is violated).

Understanding Six Sigma Analysis

A Six Sigma Analysis allows you to determine the extent to which uncertainties in the model affect the results of an analysis. An uncertainty (or random quantity) is a parameter whose value is impossible to determine at a given point in time (if it is time-dependent) or at a given location (if it is location-dependent). An example is ambient temperature; you cannot know precisely what the temperature will be one week from now in a given city.

A Six Sigma Analysis uses statistical distribution functions (such as the Gaussian, or normal, distribution, the uniform distribution, etc.) to describe uncertain parameters.

Six Sigma Analysis allows you to determine whether your product satisfies Six Sigma quality criteria. A product has Six Sigma quality if only 3.4 parts out of every 1 million manufactured fail. This quality definition is based on the assumption that an output parameter relevant to the quality and performance assessment follows a Gaussian distribution, as shown below.

An output parameter that characterizes product performance is typically used to determine whether a product's performance is satisfactory. The parameter must fall within the interval bounded by the lower specification limit (LSL) and the upper specification limit (USL). Sometimes only one of these limits exists.


An example of this is a case when the maximum von Mises stress in a component must not exceed the yield strength. The relevant output parameter is, of course, the maximum von Mises stress, and the USL is the yield strength. The lower specification limit is not relevant. The area below the probability density function falling outside the specification interval is a direct measure of the probability that the product does not conform to the quality criteria, as shown above. If the output parameter does follow a Gaussian distribution, then the product satisfies a Six Sigma quality criterion if both specification limits are at least six standard deviations away from the mean value.

In reality, an output parameter rarely exactly follows a Gaussian distribution. However, the definition of Six Sigma quality is inherently probabilistic -- it represents an admissible probability that parts do not conform to the quality criteria defined by the specified limits. The nonconformance probability can be calculated no matter which distribution the output parameter actually follows. For distributions other than Gaussian, the Six Sigma level is not really six standard deviations away from the mean value, but it does represent a probability of 3.4 parts per million, which is consistent with the definition of Six Sigma quality.

This section describes the Six Sigma Analysis tool and how to use it to perform a Six Sigma Analysis.

Principles (SSA)
Guidelines for Selecting SSA Variables
Sample Generation
Weighted Latin Hypercube Sampling
Postprocessing SSA Results
Six Sigma Analysis Theory

Principles (SSA)

Computer models are described with specific numerical and deterministic values: material properties are entered using certain values, the geometry of the component is assigned a certain length or width, etc. An analysis based on a given set of specific numbers and values is called a deterministic analysis. The accuracy of a deterministic analysis depends upon the assumptions and input values used for the analysis.

While scatter and uncertainty naturally occur in every aspect of an analysis, deterministic analyses do not take them into account. To deal with uncertainties and scatter, use Six Sigma Analysis, which allows you to answer the following questions:

• If the input variables of a finite element model are subject to scatter, how large is the scatter of the output parameters? How robust are the output parameters? Here, output parameters can be any parameter that


ANSYS Workbench can calculate. Examples are the temperature, stress, strain, or deflection at a node, the maximum temperature, stress, strain, or deflection of the model, etc.

• If the output is subject to scatter due to the variation of the input variables, then what is the probability that a design criterion given for the output parameters is no longer met? How large is the probability that an unexpected and unwanted event takes place (i.e., what is the failure probability)?

• Which input variables contribute the most to the scatter of an output parameter and to the failure probability? What are the sensitivities of the output parameter with respect to the input variables?

Six Sigma Analysis can be used to determine the effect of one or more variables on the outcome of the analysis. In addition to the Six Sigma Analysis techniques available, ANSYS Workbench offers a set of strategic tools to enhance the efficiency of the Six Sigma Analysis process. For example, you can graph the effects of one input parameter versus an output parameter, and you can easily add more samples and additional analysis loops to refine your analysis.

In traditional deterministic analyses, uncertainties are either ignored or accounted for by applying conservative assumptions. You would typically ignore uncertainties if you know for certain that the input parameter has no effect on the behavior of the component under investigation. In this case, only the mean values or some nominal values are used in the analysis. However, in some situations, the influences of uncertainties exist but are still neglected, as for the thermal expansion coefficient, for which the scatter is usually ignored.

Example 7: Accounting for Uncertainties

If you are performing a thermal analysis and want to evaluate the thermal stresses, the equation is:

σtherm = E α ∆T

because the thermal stresses are directly proportional to Young's modulus as well as to the thermal expansion coefficient of the material.

The table below shows the probability that the thermal stresses will be higher than expected, taking uncertainty variables into account.

| Uncertainty variables taken into account | Probability that the thermal stresses are more than 5% higher than expected | Probability that the thermal stresses are more than 10% higher than expected |
|---|---|---|
| Young's modulus (Gaussian distribution with 5% standard deviation) | ~16% | ~2.3% |
| Young's modulus and thermal expansion coefficient (each with Gaussian distribution with 5% standard deviation) | ~22% | ~8% |
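The probabilities in the table can be reproduced approximately with a brute-force Monte Carlo sketch. The nominal values below are arbitrary assumptions chosen for illustration; because σtherm = E·α·∆T is linear in each factor, only the relative 5% scatter matters:

```python
import random

random.seed(1)
N = 100_000
E0, ALPHA0, DT = 2.0e11, 1.2e-5, 100.0   # assumed nominal values (illustrative)
NOMINAL = E0 * ALPHA0 * DT

def prob_exceed(margin, vary_alpha):
    # Fraction of samples in which the thermal stress exceeds the
    # nominal value by more than the given margin.
    hits = 0
    for _ in range(N):
        E = random.gauss(E0, 0.05 * E0)
        a = random.gauss(ALPHA0, 0.05 * ALPHA0) if vary_alpha else ALPHA0
        if E * a * DT > (1.0 + margin) * NOMINAL:
            hits += 1
    return hits / N

p5_E = prob_exceed(0.05, False)     # Young's modulus only: roughly 16%
p5_both = prob_exceed(0.05, True)   # both uncertain: noticeably larger
```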

Reliability, Quality, and Safety Issues

Use Six Sigma Analysis when issues of reliability, quality, and safety are paramount.

Reliability is typically a concern when product or component failures have significant financial consequences (costs of repair, replacement, warranty, or penalties) or, worse, can result in injury or loss of life.


If you use a conservative assumption, the difference in thermal stresses shown above tells you that uncertainty or randomness is involved. Conservative assumptions are usually expressed in terms of safety factors. Sometimes regulatory bodies demand safety factors in certain procedural codes. If you are not faced with such restrictions or demands, then using conservative assumptions and safety factors can lead to inefficient and costly over-design. By using Six Sigma Analysis methods, you can avoid over-design while still ensuring the safety of the component.

Six Sigma Analysis methods even enable you to quantify the safety of the component by providing a probability that the component will survive operating conditions. Quantifying a goal is the necessary first step toward achieving it.

Guidelines for Selecting SSA Variables

This section presents useful guidelines for defining your Six Sigma Analysis variables.

Choosing and Defining Uncertainty Variables (p. 260)

Choosing and Defining Uncertainty Variables

First, you should:

• Specify a reasonable range of values for each uncertainty variable.

• Set reasonable limits on the variability for each uncertainty variable.

More information about choosing and defining uncertainty variables can be found in the following sections.

Uncertainty Variables for Response Surface Analyses
Choosing a Distribution for a Random Variable
Distribution Functions

Uncertainty Variables for Response Surface Analyses

The number of simulation loops that are required for a Response Surface analysis depends on the number of uncertainty variables. Therefore, you want to select the most important input variable(s), the ones you know have a significant impact on the result parameters. If you are unsure which uncertainty variables are important, include all of the random variables you can think of and then perform a Monte Carlo analysis. After you learn which uncertainty variables are important and should be included in your Response Surface Analysis, you can eliminate those that are unnecessary.

Choosing a Distribution for a Random Variable

The type and source of the data you have determines which distribution functions can be used or are best suited to your needs.

Measured Data
Mean Values, Standard Deviation, Exceedance Values
No Data

Measured Data

If you have measured data, then you must first know how reliable that data is. Data scatter is not just an inherent physical effect, but also includes inaccuracy in the measurement itself. You must consider that the person taking the measurement might have applied a "tuning" to the data. For example, if the data measured represents a load, the person measuring the load may have rounded the measurement


values; this means that the data you receive are not truly the measured values. The amount of this tuning could provide a deterministic bias in the data that you need to address separately. If possible, you should discuss any bias that might have been built into the data with the person who provided that data to you.

If you are confident about the quality of the data, then how you proceed depends on how much data you have. In a single production field, the amount of data is typically sparse. If you have only a small amount of data, use it only to evaluate a rough figure for the mean value and the standard deviation. In these cases, you could model the uncertainty variable as a Gaussian distribution if the physical effect you model has no lower and upper limit, or use the data and estimate the minimum and maximum limit for a uniform distribution.

In a mass production field, you probably have a lot of data. In these cases you could use a commercial statistical package that will allow you to actually fit a statistical distribution function that best describes the scatter of the data.

Mean Values, Standard Deviation, Exceedance Values

The mean value and the standard deviation are most commonly used to describe the scatter of data. Frequently, information about a physical quantity is given as a value such as "100±5.5". Often, this form means that the value "100" is the mean value and "5.5" is the standard deviation. Data in this form implies a Gaussian distribution, but you must verify this (a mean value and standard deviation can be provided for any collection of data, regardless of the true distribution type). If you have more information, for example, you know that the data is lognormally distributed, then Six Sigma Analysis allows you to use the mean value and standard deviation for a lognormal distribution.

Sometimes the scatter of data is also specified by a mean value and an exceedance confidence limit. The yield strength of a material is sometimes given in this way; for example, a 99% exceedance limit based on a 95% confidence level is provided. This means that, from the measured data, we can be sure by 95% that in 99% of all cases the property values will exceed the specified limit and only in 1% of all cases will they drop below the specified limit. The supplier of this information is using the mean value, the standard deviation, and the number of samples of the measured data to derive this kind of information. If the scatter of the data is provided in this way, the best way to pursue this further is to ask for more details from the data supplier. Because the given exceedance limit is based on the measured data and its statistical assessment, the supplier might be able to provide you with the details that were used.

If the data supplier does not give you any further information, then you could consider assuming that the number of measured samples was large. If the given exceedance limit is denoted with x1-α/2 and the given mean value is denoted with xµ, then the standard deviation can be derived from the equation:

σ = (x1-α/2 − xµ) / C

where the values for the coefficient C are:

| C      | Exceedance Probability |
|--------|------------------------|
| 2.5758 | 99.5%                  |
| 2.3263 | 99.0%                  |
| 1.9600 | 97.5%                  |
| 1.6449 | 95.0%                  |
| 1.2816 | 90.0%                  |
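The coefficient C is the standard normal quantile of the exceedance probability, so the table can be verified with the Python standard library, and the standard deviation recovered directly. The helper name `std_from_exceedance` is hypothetical; the absolute value keeps the result positive whether the limit lies below or above the mean:

```python
from statistics import NormalDist

def std_from_exceedance(mean, limit, exceedance_prob):
    # Assuming a Gaussian distribution and a large sample count,
    # back out sigma from: sigma = |limit - mean| / C,
    # where C is the standard normal quantile of the exceedance probability.
    C = NormalDist().inv_cdf(exceedance_prob)
    return abs(limit - mean) / C

# e.g., mean yield strength 100 with a 99% exceedance limit of 90:
sigma = std_from_exceedance(100.0, 90.0, 0.99)   # 10 / 2.3263 ~= 4.30
```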

No Data

In situations where no information is available, there is never just one right answer. Below are hints about which physical quantities are usually described in terms of which distribution functions. This information might help you with the particular physical quantity you have in mind. Also below is a list of which distribution functions are usually used for which kind of phenomena. Keep in mind that you might need to choose from multiple options.

Geometric Tolerances

• If you are designing a prototype, you could assume that the actual dimensions of the manufactured parts would be somewhere within the manufacturing tolerances. In this case it is reasonable to use a uniform distribution, where the tolerance bounds provide the lower and upper limits of the distribution function.

• If the manufacturing process generates a part that is outside the tolerance band, one of two things may happen: the part must either be fixed (reworked) or scrapped. These two cases are usually on opposite ends of the tolerance band. An example of this is drilling a hole. If the hole is outside the tolerance band but it is too small, the hole can just be drilled larger (reworked). If, however, the hole is larger than the tolerance band, then the problem is either expensive or impossible to fix. In such a situation, the parameters of the manufacturing process are typically tuned to hit the tolerance band closer to the rework side, steering clear of the side where parts need to be scrapped. In this case, a Beta distribution is more appropriate.

• Often a Gaussian distribution is used. The fact that the normal distribution has no bounds (it spans minus infinity to infinity) is theoretically a severe violation of the fact that geometrical extensions are described by finite positive numbers only. However, in practice, this lack of bounds is irrelevant if the standard deviation is very small compared to the value of the geometric extension, as is typically true for geometric tolerances.

Material Data

• Very often the scatter of material data is described by a Gaussian distribution.

• In some cases the material strength of a part is governed by the "weakest-link theory." The "weakest-link theory" assumes that the entire part will fail whenever its weakest spot fails. For material properties where the "weakest-link" assumptions are valid, the Weibull distribution might be applicable.

• For some cases, it is acceptable to use the scatter information from a similar material type. For example, suppose you know that a material type very similar to the one you are using has a certain material property with a Gaussian distribution and a standard deviation of ±5% around the measured mean value, but that for the material type you are using, you only know its mean value. In this case, you could consider using a Gaussian distribution with a standard deviation of ±5% around the given mean value.

Load Data

For loads, you usually only have a nominal or average value. You could ask the person who provided the nominal value the following questions: Out of 1000 components operated under real-life conditions, what is the lowest load value any one of the components sees? What is the most likely load value? That is, what is the value that most of these 1000 components are subject to? What is the highest load value any one component would be subject to? To be safe, you should ask these questions not only of the person who provided the nominal value, but also of one or more experts who are familiar with how


your products are operated under real-life conditions. From all the answers you get, you can then consolidate what the minimum, the most likely, and the maximum value probably is. As verification, compare this picture with the nominal value that you would use for a deterministic analysis. The nominal value should be close to the most likely value unless it includes a conservative assumption. If the nominal value includes a conservative assumption (is biased), then its value is probably close to the maximum value. Finally, you can use a triangular distribution with the minimum, most likely, and maximum values obtained.

Distribution Functions

Beta Distribution

Figure: Beta distribution probability density function fX(x) over x, with lower bound xmin, upper bound xmax, and shape parameters r and t.

You provide the shape parameters r and t and the distribution lower bound and upper bound xmin and xmax of the random variable x.

The Beta distribution is very useful for random variables that are bounded at both sides. If linear operations are applied to random variables that are all subjected to a uniform distribution, then the results can usually be described by a Beta distribution. For example, if you are dealing with tolerances and assemblies where the components are assembled and the individual tolerances of the components follow a uniform distribution (a special case of the Beta distribution), the overall tolerances of the assembly are a function of adding or subtracting the geometrical extension of the individual components (a linear operation). Hence, the overall tolerances of the assembly can be described by a Beta distribution. Also, as previously mentioned, the Beta distribution can be useful for describing the scatter of individual geometrical extensions of components as well.
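For experimentation, a Beta variate on [xmin, xmax] can be drawn with the standard library by rescaling from [0, 1] (an illustrative sketch; the helper name is hypothetical):

```python
import random

def beta_sample(r, t, xmin, xmax):
    # Beta(r, t) variate on [0, 1], rescaled to the bounded interval [xmin, xmax].
    return xmin + (xmax - xmin) * random.betavariate(r, t)
```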

Exponential Distribution

Figure: Exponential distribution probability density function fX(x) over x, with lower bound xmin and decay parameter λ.

You provide the decay parameter λ and the shift (or distribution lower bound) xmin of the random variable x.


The exponential distribution is useful in cases where there is a physical reason that the probability density function is strictly decreasing as the uncertainty variable value increases. The distribution is mostly used to describe time-related effects; for example, it describes the time between independent events occurring at a constant rate. It is therefore very popular in the area of systems reliability and lifetime-related systems reliability, and it can be used for the life distribution of non-redundant systems. Typically, it is used if the lifetime is not subjected to wear-out and the failure rate is constant with time. Wear-out is usually a dominant life-limiting factor for mechanical components, which would preclude the use of the exponential distribution for mechanical parts. However, where preventive maintenance exchanges parts before wear-out can occur, the exponential distribution is still useful to describe the distribution of the time until exchanging the part is necessary.
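A shifted exponential variate with decay parameter λ and lower bound xmin can be sampled as follows (an illustrative sketch):

```python
import random

def shifted_exponential(lam, xmin):
    # Exponential variate with decay parameter lam, shifted so that
    # the support (distribution lower bound) starts at xmin.
    return xmin + random.expovariate(lam)
```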

Gaussian (Normal) Distribution

Figure: Gaussian (normal) distribution probability density function fX(x) over x, with mean value µ and standard deviation σ.

You provide values for the mean value µ and the standard deviation σ of the random variable x.

The Gaussian, or normal, distribution is a fundamental and commonly used distribution for statistical matters. It is typically used to describe the scatter of the measurement data of many physical phenomena. Strictly speaking, every random variable follows a normal distribution if it is generated by a linear combination of a very large number of other random effects, regardless of which distribution these random effects originally follow. The Gaussian distribution is also valid if the random variable is a linear combination of two or more other effects if those effects also follow a Gaussian distribution.

Lognormal Distribution

Figure: Lognormal distribution probability density function fX(x) over x, with logarithmic mean ξ and logarithmic deviation δ.

You provide values for the logarithmic mean value ξ and the logarithmic deviation δ. The parameters ξ and δ are the mean value and standard deviation of ln(x):


f_X(x) = \frac{1}{x\,\delta\,\sqrt{2\pi}} \exp\!\left[-\frac{1}{2}\left(\frac{\ln x - \xi}{\delta}\right)^2\right]

The lognormal distribution is another basic and commonly used distribution, typically used to describe the scatter of the measurement data of physical phenomena where the logarithm of the data would follow a normal distribution. The lognormal distribution is suitable for phenomena that arise from the multiplication of a large number of error effects. It is also used for random variables that are the result of multiplying two or more random effects (if the effects that get multiplied are also lognormally distributed). It is often used for lifetime distributions such as the scatter of the strain amplitude of a cyclic loading that a material can endure until low-cycle fatigue occurs.
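The role of ξ and δ as the mean and standard deviation of ln(x) can be seen directly by sampling (an illustrative sketch): exponentiating a Gaussian draw of ln(x) produces the lognormal variate.

```python
import math
import random

def lognormal_sample(xi, delta):
    # ln(x) ~ Gaussian(xi, delta), so x = exp(ln(x)) is lognormally distributed.
    return math.exp(random.gauss(xi, delta))
```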

Uniform Distribution

Figure: Uniform distribution probability density function fX(x) over x, constant between xmin and xmax.

You provide the distribution lower bound and upper bound xmin and xmax of the random variable x.

The uniform distribution is a fundamental distribution for cases where the only information available is a lower and an upper bound. It is also useful to describe geometric tolerances. It can also be used in cases where any value of the random variable is as likely as any other within a certain interval. In this sense, it can be used for cases where "lack of engineering knowledge" plays a role.

Triangular Distribution

Figure: Triangular distribution probability density function fX(x) over x, with lower bound xmin, most likely value xmlv, and upper bound xmax.

You provide the minimum value (or distribution lower bound) xmin, the most likely value xmlv, and the maximum value (or distribution upper bound) xmax.

The triangular distribution is most helpful to model a random variable when actual data is not available. It is very often used to capture expert opinions, as in cases where the only data you have are the well-founded opinions of experts. However, regardless of the physical nature of the random variable you


want to model, you can always ask experts questions like "Out of 1000 components, what are the lowest and highest load values for this random variable?" and other similar questions. You should also include an estimate for the random variable value derived from a computer program, as described above. For more details, see Choosing a Distribution for a Random Variable (p. 260).
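Once the minimum, most likely, and maximum values have been consolidated, a triangular variate can be drawn directly from the standard library (an illustrative sketch; note that `random.triangular` takes the mode as its third argument):

```python
import random

def triangular_sample(xmin, xmlv, xmax):
    # random.triangular(low, high, mode): low/high are the distribution
    # bounds, mode is the most likely value xmlv.
    return random.triangular(xmin, xmax, xmlv)
```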

Truncated Gaussian (Normal) Distribution

Figure: Truncated Gaussian distribution probability density function fX(x) over x, with non-truncated mean value µG and standard deviation σG, truncated at xmin and xmax.

You provide the mean value µ and the standard deviation σ of the non-truncated Gaussian distribution and the truncation limits xmin and xmax (or distribution lower bound and upper bound).

The truncated Gaussian distribution typically appears where the physical phenomenon follows a Gaussian distribution, but the extreme ends are cut off or are eliminated from the sample population by quality control measures. As such, it is useful to describe the material properties or geometric tolerances.
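The cut-off caused by quality control measures can be mimicked with rejection sampling (an illustrative sketch): Gaussian variates are simply redrawn until one falls inside the truncation limits.

```python
import random

def truncated_gauss(mu, sigma, xmin, xmax):
    # Redraw until the Gaussian variate lies inside [xmin, xmax].
    while True:
        x = random.gauss(mu, sigma)
        if xmin <= x <= xmax:
            return x
```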

Weibull Distribution

Figure: Weibull distribution probability density function fX(x) over x, with lower bound xmin, Weibull exponent m, and characteristic value xchr.

You provide the Weibull characteristic value xchr, the Weibull exponent m, and the minimum value xmin (or distribution lower bound). There are several special cases. For xmin = 0 the distribution coincides with a two-parameter Weibull distribution. The Rayleigh distribution is a special case of the Weibull distribution with α = xchr − xmin and m = 2.

In engineering, the Weibull distribution is most often used for strength or strength-related lifetime parameters, and is the standard distribution for material strength and lifetime parameters for very brittle materials (for these very brittle materials, the "weakest-link theory" is applicable). For more details, see Choosing a Distribution for a Random Variable (p. 260).
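A shifted (three-parameter) Weibull variate can be built from the standard library's two-parameter form (an illustrative sketch); with xmin = 0 it reduces to the two-parameter case, and by construction the characteristic value xchr is exceeded with probability 1/e:

```python
import random

def shifted_weibull(xchr, m, xmin):
    # random.weibullvariate(scale, shape) samples scale * (-ln U)**(1/shape);
    # shifting by xmin with scale = xchr - xmin gives the shifted form.
    return xmin + random.weibullvariate(xchr - xmin, m)
```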


Sample Generation

For Six Sigma Analysis, the sample generation by default is based on the Latin Hypercube Sampling (LHS) technique. (You can also select Weighted Latin Hypercube Sampling (WLHS) as the Sampling Type in the Six Sigma Properties view.) The LHS technique is a more advanced and efficient form of the Monte Carlo analysis methods. In Latin Hypercube Sampling, the points are randomly generated in a square grid across the design space, but no two points share input parameters of the same value (i.e., no point shares a row or a column of the grid with any other point). Generally, the LHS technique requires 20% to 40% fewer simulation loops than the Direct Monte Carlo analysis technique to deliver the same results with the same accuracy. However, that number is largely problem-dependent.
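The row/column property of LHS can be sketched as follows (illustrative only, not the DesignXplorer implementation): each variable's [0, 1) range is split into n equal strata, one random value is drawn per stratum, and the strata are shuffled independently per variable, so every row and column of the grid is hit exactly once.

```python
import random

def latin_hypercube(n, dim):
    # One sample per stratum and per variable: stratum i of variable d
    # contributes exactly one coordinate, in shuffled order.
    cols = []
    for _ in range(dim):
        strata = [(i + random.random()) / n for i in range(n)]
        random.shuffle(strata)
        cols.append(strata)
    return [tuple(col[i] for col in cols) for i in range(n)]
```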

Weighted Latin Hypercube Sampling

In some cases, a very small probability of failure (Pf), e.g., on the order of 10⁻⁶, is of interest in engineering design. This would require on the order of 100/Pf runs of regular Latin Hypercube samples. To reduce the number of runs, Weighted Latin Hypercube Sampling (WLHS) may be used to generate more (biased) samples in the region of interest.

In WLHS, the input variables are discretized unevenly in their design space. The cell (or hypercube, in multiple dimensions) size of the probability of occurrence is evaluated according to the topology of the output/response. The cell size of the probability of occurrence is discretized such that it is relatively smaller around the minimum and maximum of the output/response. Because the evaluation of the cell size is output/response oriented, WLHS is somewhat unsymmetrical (biased).

In general, WLHS is intended to stretch the distribution farther out in the tails (lower and upper) with fewer runs than Latin Hypercube Sampling (LHS). In other words, given the same number of runs, WLHS is expected to reach a smaller Pf when compared to LHS. Due to this bias, however, the evaluated Pf may be subject to some difference compared to LHS.

Postprocessing SSA Results

• Histogram (p. 267)

• Cumulative Distribution Function (p. 268)

• Probability Table (p. 269)

• Statistical Sensitivities in a Six Sigma Analysis (p. 270)

Histogram

A histogram plot is most commonly used to visualize the scatter of a Six Sigma Analysis variable. A histogram is derived by dividing the range between the minimum value and the maximum value into intervals of equal size. Then Six Sigma Analysis determines how many samples fall within each interval, that is, how many "hits" landed in each interval.

Six Sigma Analysis also allows you to plot histograms of your uncertainty variables so you can double-check that the sampling process generated the samples according to the distribution function you specified. For uncertainty variables, Six Sigma Analysis not only plots the histogram bars, but also a curve for values derived from the distribution function you specified. Visualizing histograms of the uncertainty variables is another way to verify that enough simulation loops have been performed. If the number of simulation loops is sufficient, the histogram bars will:


• Be close to the curve that is derived from the distribution function

• Be "smooth" (without large "steps")

• Not have major gaps (no hits in an interval where neighboring intervals have many hits)

However, if the probability density function is flattening out at the far ends of a distribution (for example, the exponential distribution flattens out for large values of the uncertainty variable), then there might logically be gaps. Hits are counted only as positive integer numbers, and as these numbers gradually get smaller, a zero hit can happen in an interval.

Cumulative Distribution Function

The cumulative distribution function is a primary review tool if you want to assess the reliability or the failure probability of your component or product. Reliability is defined as the probability that no failure occurs. Hence, in a mathematical sense, reliability and failure probability are different ways of examining the same problem, and numerically they complement each other (they sum to 1.0). The cumulative distribution function value at any given point expresses the probability that the respective parameter value will remain below that point.

The value of the cumulative distribution function at the location x0 is the probability that the values of X stay below x0. Whether this probability represents the failure probability or the reliability of your component depends on how you define failure; for example, if you design a component such that a certain deflection should not exceed a certain admissible limit, then a failure event occurs if the critical deflection exceeds this limit. Thus, for this example, the cumulative distribution function is interpreted as the reliability curve of the component. On the other hand, if you design a component such that the eigenfrequencies are beyond a certain admissible limit, then a failure event occurs if an eigenfrequency drops below this limit. So for this example, the cumulative distribution function is interpreted as the failure probability curve of the component.

The cumulative distribution function also lets you visualize what the reliability or failure probability would be if you chose to change the admissible limits of your design.

A cumulative distribution function plot is an important tool to quantify the probability that the design of your product does or does not satisfy quality and reliability requirements. The value of a cumulative distribution function of a particular output parameter represents the probability that the output parameter will remain below a certain level, as indicated by the values on the x-axis of the plot.

Example 8: Illustration of Cumulative Distribution Function

The probability that the Shear Stress Maximum will remain less than a limit value of 1.71E+5 is about 93%, which also means that there is a 7% probability that the Shear Stress Maximum will exceed the limit value of 1.71E+5.
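The probability read from a cumulative distribution chart is an empirical CDF value, which can be approximated directly from raw samples. A minimal sketch (the sample values and the 1.71E+5 limit below are hypothetical illustrations, not DesignXplorer output):

```python
def empirical_cdf(samples, limit):
    """Fraction of samples at or below 'limit' -- an estimate of P(X <= limit)."""
    hits = sum(1 for x in samples if x <= limit)
    return hits / len(samples)

# Hypothetical shear-stress samples for illustration only.
stresses = [1.52e5, 1.58e5, 1.61e5, 1.64e5, 1.66e5,
            1.68e5, 1.70e5, 1.72e5, 1.75e5, 1.80e5]
p = empirical_cdf(stresses, 1.71e5)
print(p)  # 0.7 -> 70% of these samples stay below the limit, 30% exceed it
```

With a real Six Sigma sample set the same fraction converges to the plotted CDF value as the sample count grows.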

Release 15.0 - © SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information of ANSYS, Inc. and its subsidiaries and affiliates.

DesignXplorer Theory


Figure 10: Cumulative Distribution Function

For more information, see Six Sigma Analysis Theory (p. 271).

Probability Table

Instead of reading data from the cumulative distribution chart, you can also obtain important information about the cumulative distribution function in tabular form. A Probability Table is available that is designed to provide probability values for an even spread of levels of an input or output parameter. You can view the table in either Quantile-Percentile (Probability) mode or Percentile-Quantile (Inverse Probability) mode. The Probability Table lets you find out the parameter levels corresponding to probability levels that are typically used for the design of reliable products. If you want to see the probability of a value that is not listed, you can add it to the table. Likewise, you can add a probability or sigma level and see the corresponding values. You can also delete values from the table. For more information, see Using Statistical Postprocessing (p. 181).

Figure 11: Probability Tables


Understanding Six Sigma Analysis


Note

Both tables will have more rows (that is, include more data) if the number of samples is increased. If you are designing for high product reliability, that is, a low probability that the product does not conform to quality or performance requirements, then the sample size must be adequately large to address those low probabilities. Typically, if the probability that your product does not conform to the requirements is denoted "Preq", then the minimum number of samples should be determined by: Nsamp = 10.0 / Preq.

For example, if your product has a probability of Preq = 1.0e-4 that it does not conform to the requirements, then the minimum number of samples should be Nsamp = 10.0 / 1.0e-4 = 1.0e+5.
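The rule of thumb above is a one-line calculation; a small helper sketch (the function name is illustrative, not a DesignXplorer API):

```python
import math

def min_samples(p_req):
    """Minimum sample count for resolving a non-conformance probability p_req,
    following the rule of thumb N_samp = 10 / P_req stated above."""
    return math.ceil(10.0 / p_req)

print(min_samples(1.0e-4))  # 100000
```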

Statistical Sensitivities in a Six Sigma Analysis

The available sensitivities plots allow you to efficiently improve your design toward a more reliable and better quality design, or to save money in the manufacturing process while maintaining the reliability and quality of your product. You can view a sensitivities plot for any output parameter in your model. See Sensitivities Chart (SSA) (p. 49) for more information.

The sensitivities available under the Six Sigma Analysis and the Goal Driven Optimization views are statistical sensitivities. Statistical sensitivities are global sensitivities, whereas the parameter sensitivities available under the Responses view are local sensitivities. The global, statistical sensitivities are based on a correlation analysis using the generated sample points, which are located throughout the entire space of input parameters. The local parameter sensitivities are based on the difference between the minimum and maximum value obtained by varying one input parameter while holding all other input parameters constant. As such, the values obtained for local parameter sensitivities depend on the values of the input parameters that are held constant. Global, statistical sensitivities do not depend on the values of the input parameters, because all possible values for the input parameters are already taken into account when determining the sensitivities.

Design exploration displays sensitivities as both a bar chart and a pie chart. The charts describe the sensitivities taking the signs into account: a positive sensitivity indicates that increasing the value of the uncertainty variable increases the value of the result parameter for which


the sensitivities are plotted. Conversely, a negative sensitivity indicates that increasing the uncertainty variable value reduces the result parameter value.

Using a sensitivity plot, you can answer the following important questions.

How can I make the component more reliable or improve its quality?

If the results for the reliability or failure probability of the component do not reach the expected levels, or if the scatter of an output parameter is too wide and therefore not robust enough for a quality product, then you should make changes to the important input variables first. Modifying an input variable that is insignificant would be a waste of time.

Of course, you are not in control of all uncertainty parameters. A typical example where you have very limited means of control involves material properties. For example, if it turns out that the environmental temperature (outdoor) is the most important input parameter, then there is probably nothing you can do. However, even if you find out that the reliability or quality of your product is driven by parameters that you cannot control, this data has importance -- it is likely that you have a fundamental flaw in your product design! You should watch for influential parameters like these.

If the input variable you want to tackle is a geometry-related parameter or a geometric tolerance, then improving the reliability and quality of your product means that it might be necessary to change to a more accurate manufacturing process or use a more accurate manufacturing machine. If it is a material property, then there might be nothing you can do about it. However, if you only had a few measurements for a material property and consequently used only a rough guess about its scatter, and the material property turns out to be an important driver of product reliability and quality, then it makes sense to collect more raw data.

How can I save money without sacrificing the reliability or the quality of the product?

If the results for the reliability or failure probability of the component are acceptable, or if the scatter of an output parameter is small and therefore robust enough for a quality product, then there is usually the question of how to save money without reducing the reliability or quality. In this case, you should first make changes to the input variables that turned out to be insignificant, because they do not affect the reliability or quality of your product. If it is the geometrical properties or tolerances that are insignificant, you can consider applying a less expensive manufacturing process. If a material property turns out to be insignificant, then this is not typically a good way to save money, because you are usually not in control of individual material properties. However, the loads or boundary conditions can be a potential for saving money, but in which sense this can be exploited is highly problem-dependent.

Six Sigma Analysis Theory

The purpose of a Six Sigma Analysis is to gain an understanding of the impact of uncertainties associated with the input parameters of your design. This goal is achieved using a variety of statistical measures and postprocessing tools.

Statistical Postprocessing

Convention: Set of data xi.

1. Mean Value

Mean is a measure of average for a set of observations. The mean of a set of n observations is defined as follows:


(80)   \hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i

2. Standard Deviation

Standard deviation is a measure of dispersion from the mean for a set of observations. The standard deviation of a set of n observations is defined as follows:

(81)   \hat{\sigma} = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left( x_i - \hat{\mu} \right)^2}

3. Sigma Level

Sigma level is calculated as the inverse cumulative distribution function of a standard Gaussian distribution at a given percentile. Sigma level is used in conjunction with standard deviation to measure data dispersion from the mean. For example, a quantile X with sigma level nα means that the value X is about nα standard deviations away from the sample mean.
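For illustration, Python's standard library exposes the same inverse Gaussian CDF through statistics.NormalDist (a generic sketch of the quantile-to-sigma-level relationship, not the DesignXplorer implementation):

```python
from statistics import NormalDist

_std_normal = NormalDist(mu=0.0, sigma=1.0)

def sigma_level(percentile):
    """Inverse CDF of the standard Gaussian at the given percentile."""
    return _std_normal.inv_cdf(percentile)

print(round(sigma_level(0.5), 4))  # 0.0 -> the median sits exactly at the mean
print(sigma_level(0.97725))        # close to 2: about two standard deviations above the mean
```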

4. Skewness

Skewness is a measure of the degree of asymmetry around the mean for a set of observations. The observations are symmetric if the distribution of the observations looks the same to the left and right of the mean. Negative skewness indicates that the distribution of the observations is left-skewed. Positive skewness indicates that the distribution of the observations is right-skewed. The skewness of a set of n observations is defined as follows:

(82)   \hat{\gamma} = \frac{n}{(n-1)(n-2)} \sum_{i=1}^{n} \left( \frac{x_i - \hat{\mu}}{\hat{\sigma}} \right)^3

5. Kurtosis

Kurtosis is a measure of the relative peakedness or flatness of the distribution for a set of observations. It is generally a relative comparison with the normal distribution. Negative kurtosis indicates a relatively flat distribution of the observations compared to the normal distribution, while positive kurtosis indicates a relatively peaked distribution of the observations. As such, the kurtosis of a set of n observations is defined with calibration to the normal distribution as follows:

(83)   \hat{\kappa} = \frac{n(n+1)}{(n-1)(n-2)(n-3)} \sum_{i=1}^{n} \left( \frac{x_i - \hat{\mu}}{\hat{\sigma}} \right)^4 - \frac{3(n-1)^2}{(n-2)(n-3)}


where \hat{\mu} and \hat{\sigma} represent the mean and standard deviation, respectively.

Table 1: Different Types of Kurtosis

κ < 0 : relatively flat distribution
κ = 0 : normal distribution
κ > 0 : relatively peaked distribution
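The four sample statistics above can be sketched together in a few lines (a generic implementation of Equations 80-83 under the usual bias-corrected sample conventions, assumed here to match the reconstructed formulas; not DesignXplorer code):

```python
import math

def moments(x):
    """Mean, standard deviation, skewness, and excess kurtosis of a sample."""
    n = len(x)
    mean = sum(x) / n                                               # Eq. (80)
    std = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))      # Eq. (81)
    skew = (n / ((n - 1) * (n - 2))) \
        * sum(((v - mean) / std) ** 3 for v in x)                   # Eq. (82)
    kurt = (n * (n + 1) / ((n - 1) * (n - 2) * (n - 3))
            * sum(((v - mean) / std) ** 4 for v in x)
            - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))               # Eq. (83)
    return mean, std, skew, kurt

mean, std, skew, kurt = moments([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(round(mean, 3), round(std, 3))  # 5.0 2.138
```

For this right-skewed sample, the computed skewness is positive, matching the sign convention described above.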

6. Shannon Entropy

Entropy is a concept first introduced in classical thermodynamics. It is a measure of disorder in a system. The concept of entropy was later extended and recognized by Claude Shannon in the domain of information/communication as a measure of uncertainty over the true content of a message. In statistics, the entropy is further reformulated as a function of mass density, as follows:

(84)   S = - \sum_{i} P_i \ln P_i

where Pi represents the mass density of a parameter. In the context of statistics/probability, entropy becomes a measure of the complexity and predictability of a parameter. A more complex, and less predictable, parameter carries higher entropy, and vice versa. For example, a parameter characterized by a uniform distribution has higher entropy than one characterized by a truncated triangular distribution (within the same bounds). Hence, the parameter characterized by the uniform distribution is less predictable than the one with the triangular distribution.

(85)   P_i = \frac{N_i}{N \cdot \Delta w}

The probability mass is used in a normalized fashion, such that not only the shape, but also the range of variability of the distribution is accounted for. This is shown in Equation 85 (p. 273), where Ni/N is the relative frequency of the parameter falling into a certain interval, and ∆w is the width of the interval. As a result, the Shannon entropy can have a negative value.
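The histogram-based entropy can be sketched as follows, assuming the density normalization P_i = N_i / (N · ∆w) reconstructed in Equation 85 (the exact normalization used by DesignXplorer is not shown in this extract, so treat this as illustrative):

```python
import math

def shannon_entropy(counts, widths):
    """S = -sum(P_i * ln P_i) with the normalized density P_i = N_i / (N * w_i).
    Densities above 1 contribute negative terms, so a narrowly scattered
    parameter can have negative entropy."""
    n_total = sum(counts)
    s = 0.0
    for n_i, w in zip(counts, widths):
        if n_i == 0:
            continue  # empty intervals contribute nothing
        density = n_i / (n_total * w)
        s -= density * math.log(density)
    return s

uniform = [25, 25, 25, 25]
print(round(shannon_entropy(uniform, [0.25] * 4), 4))  # 0.0 over range [0, 1]
print(shannon_entropy(uniform, [0.125] * 4))           # negative over the narrower range [0, 0.5]
```

Note that a triangular-shaped histogram over the same bins yields a lower entropy than the uniform one, matching the comparison in the text.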

7. Taguchi Signal-to-Noise Ratios

Three signal-to-noise ratios are provided in the statistics of each output in your SSA. These ratios are as follows:

• Nominal is Best


• Smaller is Better

• Larger is Better

Signal-to-noise (S/N) ratios are measures used to optimize control parameters in order to achieve a robust design. These measures were first proposed by Dr. Taguchi of Nippon Telephone and Telegraph Company, Japan, to reduce design noises in manufacturing processes. These design noises are normally expressed in statistical terms such as mean and standard deviation (or variance). In computer aided engineering (CAE), these ratios have been widely used to achieve a robust design in computer simulations. For a design to be robust, the simulations are carried out with an objective to minimize the variances. The minimum variance of designs/simulations can be sought with or without targeting a certain mean. In design exploration, minimum variance targeted at a certain mean (called Nominal is Best) is provided, and is given as follows:

(86)   \eta = 10 \log_{10} \left( \frac{\hat{\mu}^2}{\hat{\sigma}^2} \right)

where \hat{\mu} and \hat{\sigma} represent the mean and standard deviation, respectively.

Nominal is Best is a measure used for characterizing design parameters such as a model dimension in a tolerance design, in which a specific dimension is required with an acceptable standard deviation.

In some designs, however, the objectives are to seek a minimum or a maximum possible at the price of any variance.

• For the cases of minimum possible (Smaller is Better), which is a measure used for characterizing output parameters such as model deformation, the S/N is expressed as follows:

(87)   \eta = -10 \log_{10} \left( \frac{1}{n} \sum_{i=1}^{n} y_i^2 \right)

• For the cases of maximum possible (Larger is Better), which is a measure used for characterizing output parameters such as material yield, the S/N is formulated as follows:

(88)   \eta = -10 \log_{10} \left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i^2} \right)

These three S/N ratios are mutually exclusive. Only one of the ratios can be optimized for any givenparameter. For a better design, these ratios should be maximized.
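The three ratios can be sketched with the classical Taguchi log-scaled formulas shown in Equations 86-88 (the exact scaling used by DesignXplorer is not legible in this extract, so treat these as illustrative implementations):

```python
import math

def sn_nominal_is_best(values):
    """Nominal is Best: eta = 10*log10(mean^2 / variance) -- classical Taguchi form."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 10.0 * math.log10(mean ** 2 / var)

def sn_smaller_is_better(values):
    """Smaller is Better: eta = -10*log10(mean of y^2)."""
    return -10.0 * math.log10(sum(v ** 2 for v in values) / len(values))

def sn_larger_is_better(values):
    """Larger is Better: eta = -10*log10(mean of 1/y^2)."""
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / len(values))

deflections = [0.9, 1.0, 1.1, 1.0]  # hypothetical output samples
print(sn_smaller_is_better(deflections))
```

A tighter scatter around the target raises the Nominal is Best ratio, which is why all three ratios are maximized for a better design.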

8. Minimum and Maximum Values

The minimum and maximum values of a set of n observations are:

(89)   x_{min} = \min(x_1, x_2, \ldots, x_n)


(90)   x_{max} = \max(x_1, x_2, \ldots, x_n)

Note

The minimum and maximum values strongly depend on the number of samples. If you generate a new sample set with more samples, then chances are that the minimum value will be lower in the larger sample set. Likewise, the maximum value of the larger sample set will most likely be higher than for the original sample set. Hence, the minimum and maximum values should not be interpreted as absolute physical bounds.

9. Statistical Sensitivity Measures

The sensitivity charts displayed under the Six Sigma Analysis view are global sensitivities based on statistical measures (see Single Parameter Sensitivities). Generally, the impact of an input parameter on an output parameter is driven by:

• The amount by which the output parameter varies across the variation range of an input parameter.

• The variation range of an input parameter. Typically, the wider the variation range is, the larger the impact of the input parameter will be.

The statistical sensitivities used under Six Sigma Analysis are based on the Spearman rank-order correlation coefficients, which take both of those aspects into account at the same time.

Basing sensitivities on correlation coefficients follows the concept that the more strongly an output parameter is correlated with a particular input parameter, the more sensitive it is with respect to changes of that input parameter.

10. Spearman Rank-Order Correlation Coefficient

The Spearman rank-order correlation coefficient is:

(91)   r_s = \frac{\sum_{i} (R_i - \bar{R})(S_i - \bar{S})}{\sqrt{\sum_{i} (R_i - \bar{R})^2} \sqrt{\sum_{i} (S_i - \bar{S})^2}}

where:

R_i = rank of x_i within the set of observations (x_1, x_2, ..., x_n)^T

S_i = rank of y_i within the set of observations (y_1, y_2, ..., y_n)^T

\bar{R}, \bar{S} = average ranks of R_i and S_i, respectively
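Equation 91 can be sketched as a Pearson correlation applied to ranks (a generic implementation with average ranks for ties; not the DesignXplorer code):

```python
def ranks(values):
    """1-based rank of each value within its set (average rank for ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend the run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank-order correlation coefficient, Eq. (91)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

print(spearman([1.0, 2.0, 3.0, 4.0], [10.0, 40.0, 80.0, 160.0]))  # 1.0
```

Because only the ranks matter, any monotonically increasing relationship gives a coefficient of 1.0, which is exactly why this measure captures nonlinear but monotonic dependence.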

11. Significance of Correlation Coefficients

Since the sample size n is finite, the correlation coefficient rp is a random variable itself. Hence, the correlation coefficient between two random variables X and Y usually yields a small, but nonzero, value even if X and Y are not correlated at all in reality. In this case, the correlation coefficient would


be insignificant. Therefore, we need to find out if a correlation coefficient is significant or not. To determine the significance of the correlation coefficient, we assume the hypothesis that the correlation between X and Y is not significant at all, i.e., they are not correlated and rp = 0 (null hypothesis).

In this case the variable:

(92)   t = r_p \sqrt{\frac{n-2}{1-r_p^2}}

is approximately distributed like the Student's t-distribution with ν = n - 2 degrees of freedom. The cumulative distribution function of the Student's t-distribution is:

(93)   A(t \mid \nu) = \frac{1}{\sqrt{\nu} \, B\!\left(\frac{1}{2}, \frac{\nu}{2}\right)} \int_{-t}^{t} \left( 1 + \frac{x^2}{\nu} \right)^{-\frac{\nu+1}{2}} dx

where:

B(...) = complete Beta function

There is no closed-form solution available for Equation 93 (p. 276). See Abramowitz and Stegun (Pocketbook of Mathematical Functions, abridged version of the Handbook of Mathematical Functions, Harry Deutsch, 1984) for more details.

The larger the correlation coefficient rp, the less likely it is that the null hypothesis is true. Also, the larger the correlation coefficient rp, the larger the value of t from Equation 92 (p. 276) and, consequently, the larger the probability A(t|ν). Therefore, the probability that the null hypothesis is true is given by 1-A(t|ν). If 1-A(t|ν) exceeds a certain significance level, for example 2.5%, then we can assume that the null hypothesis is true. However, if 1-A(t|ν) is below the significance level, then it can be assumed that the null hypothesis is not true and that consequently the correlation coefficient rp is significant. This limit can be changed in Design Exploration Options (p. 19).

12. Cumulative Distribution Function

The cumulative distribution function of sampled data is also called the empirical distribution function. To determine the cumulative distribution function of sampled data, you need to order the sample values in ascending order. Let xi be the sampled value of the random variable X having a rank of i, i.e., being the ith smallest out of all n sampled values. The cumulative distribution function Fi that corresponds to xi is the probability that the random variable X has values below or equal to xi. Since we have only a limited amount of samples, the estimate for this probability is itself a random variable. According to Kececioglu (Reliability Engineering Handbook, Vol. 1, 1991, Prentice-Hall, Inc.), the cumulative distribution function Fi associated with xi is:

(94)   \sum_{k=i}^{n} \binom{n}{k} F_i^k \left( 1 - F_i \right)^{n-k} = 0.50

Equation 94 (p. 276) must be solved numerically.
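A minimal numerical solution of Equation 94 by bisection, exploiting that the binomial tail probability increases monotonically with F (an illustrative sketch; the solver DesignXplorer uses is not specified here):

```python
import math

def median_rank(i, n, tol=1e-10):
    """Solve Eq. (94) for F_i: the probability level at which the chance that
    at least i of n samples fall below F_i equals 0.50."""
    def tail(f):
        # P(at least i successes out of n with success probability f)
        return sum(math.comb(n, k) * f ** k * (1.0 - f) ** (n - k)
                   for k in range(i, n + 1))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tail(mid) < 0.5:
            lo = mid  # tail(f) increases with f, so move the lower bound up
        else:
            hi = mid
    return (lo + hi) / 2

print(round(median_rank(1, 1), 4))  # 0.5 for a single sample
```

By symmetry, the middle observation of an odd-sized sample also lands exactly at F = 0.5.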

13. Probability and Inverse Probability

The cumulative distribution function of sampled data can only be given at the individual sampled values x1, x2, ..., xi, xi+1, ..., xn using Equation 94 (p. 276). Hence, the evaluation of the probability that


the random variable is less than or equal to an arbitrary value x requires an interpolation between the available data points.

If x is, for example, between xi and xi+1, then the probability that the random variable X is less than or equal to x is:

(95)   P(X \le x) = F_i + \frac{x - x_i}{x_{i+1} - x_i} \left( F_{i+1} - F_i \right)

The cumulative distribution function of sampled data can only be given at the individual sampled values x1, x2, ..., xi, xi+1, ..., xn using Equation 94 (p. 276). Hence, the evaluation of the inverse cumulative distribution function for any arbitrary probability value requires an interpolation between the available data points.

The evaluation of the inverse of the empirical distribution function is most important in the tails of the distribution, where the slope of the empirical distribution function is flat. In this case, a direct interpolation between the points of the empirical distribution function, similar to Equation 95 (p. 277), can lead to inaccurate results. Therefore, the inverse standard normal distribution function Φ-1 is applied to all probabilities involved in the interpolation. If p is the requested probability for which we are looking for the inverse cumulative distribution function value, and p is between Fi and Fi+1, then the inverse cumulative distribution function value can be calculated using:

(96)   x = x_i + \left( x_{i+1} - x_i \right) \frac{\Phi^{-1}(p) - \Phi^{-1}(F_i)}{\Phi^{-1}(F_{i+1}) - \Phi^{-1}(F_i)}
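Equation 96 can be sketched with the standard library's normal distribution (the x and F pairs below are hypothetical sample points for illustration, not DesignXplorer output):

```python
from statistics import NormalDist

inv_phi = NormalDist().inv_cdf  # standard normal inverse CDF, Phi^-1

def inverse_probability(p, x_i, x_next, f_i, f_next):
    """Eq. (96): interpolate the inverse empirical CDF between two sampled
    points (x_i, F_i) and (x_{i+1}, F_{i+1}) in standard-normal-transformed
    probability space, which behaves better in the distribution tails."""
    t = (inv_phi(p) - inv_phi(f_i)) / (inv_phi(f_next) - inv_phi(f_i))
    return x_i + (x_next - x_i) * t

# Hypothetical bracket: x = 100 at F = 0.90, x = 110 at F = 0.99.
print(inverse_probability(0.95, 100.0, 110.0, 0.90, 0.99))
```

At p = F_i the interpolation returns x_i exactly, and at p = F_{i+1} it returns x_{i+1}, so the scheme passes through the sampled points.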



Troubleshooting

An ANSYS DesignXplorer Update operation returns an error message saying that one or several design points have failed to update.

Click the Show Details button in the error dialog to see the list of design points that failed to update and the associated error messages. Each failed design point is automatically preserved for you at the project level, so you can Return to the Project and Edit the Parameter Set cell in order to see the failed design points and investigate the reason for the failures.

This error means that the project failed to update correctly for the parameter values defined in the listed design points. There may be a variety of failure reasons, from the lack of a license to a CAD model generation failure, for instance. If you try to update the system again, only the failed design points will be attempted for a new Update, and if the update is successful, the operation will complete. If there are persistent failures with a design point, copy its values to the Current Design Point, attempt a Project Update, and edit the cells in the project that are not correctly updated in order to investigate.

When you open a project, one or more DesignXplorer components are marked with a black "X" icon on the Project Schematic.

The black “X” icon next to a DesignXplorer component indicates a component Status of Disabled. This status is given to DesignXplorer components that fail to load properly because the project has been corrupted by the absence of necessary files (most often the parameters.dxdb and parameters.params files). The Messages view displays a warning message indicating the reason for the failure and listing each of the disabled components.

When a DesignXplorer component is disabled, you cannot Update, Edit, or Preview the component. Try one of the following methods to address the issue.

Method 1: (recommended) Navigate to the Files view, locate the missing files if possible, and copy them into the correct project folder (the parameters.dxdb and parameters.params files are normally located in the subdirectory <project name>_files\dpall\global\DX). With the necessary files restored, the disabled components should load successfully.

Method 2: Delete all DesignXplorer systems containing disabled components. Once the project is free of corrupted components, you can save the project and set up new DesignXplorer systems.

Workbench interacts with the Excel application.

When an Excel file is added to an Excel component in a project, an instance of the Excel application is launched in the background. Then, if you either double-click any other Excel workbook in the Windows Explorer window or click any other Excel workbook from the Recent items option on the Start Menu in order to open it, this action attempts to use the instance of the Excel application that is already running. As a result, all of the background workbooks are displayed in the foreground, and any update of the Excel components in Workbench prevents the user from modifying any opened Excel workbooks. As such, it is recommended that, to open or view a different workbook, you do not use the same instance of Excel that Workbench is using.

To open or view a different Excel workbook (other than the ones added to the project):


1. Start a new instance of Excel via the Start menu (i.e., Start > All Programs > Microsoft Office > Microsoft Excel).

2. From the new instance of Excel, open the workbook via the File > Open menu option.

The workbook will open in the new instance of Excel.

Note

You can still view the Excel workbooks added to the project by selecting Open File in Excel from the right-click context menu.

Fatal error causes project to get stuck in a Pending state.

If a fatal error occurs while a cell of DesignXplorer is either in a Pending state or resuming from a pending update, the project can get stuck in the Pending state even if the pending updates are no longer running.

To recover from this situation, you can abandon the pending updates by selecting the Abandon Pending Updates option from the Tools menu.

Incorrect derived parameter definition causes a DesignXplorer component to be in an Unfulfilled state.

Derived parameters must be correctly defined in order to update a DesignXplorer component or a project. A derived parameter definition is invalid if, for example, the parameter is defined without units or the value of the Expression property has incorrect syntax. DesignXplorer system cells will remain in an Unfulfilled state until the error is fixed.

Additionally, prior to Workbench version 14.0, some DesignXplorer components could have been incorrectly updated in spite of an error in the parameter definition. If you open a project created with an earlier release version and find that components that were previously in an Up to Date state are now in an Unfulfilled state, verify that the derived parameters in the project are correctly defined.

To verify derived parameter definitions, right-click the Parameter Set bar and select Edit to open the Parameter Set tab. In the Outline of All Parameters view, derived parameters that are incorrectly defined will have a red cell containing an error message in the Value column. Below, in the Properties of Outline view, the same error message will display in the Expression property cell.

Based on the information in the error message, correct derived parameter definitions as follows:

1. Select the parameter in the Outline of All Parameters view.

2. In the Outline of All Parameters view, enter a valid expression for the Expression property.

3. Refresh the project or the unfulfilled DesignXplorer component to apply the changes.


Appendices

• Extended CSV File Format (p. 281)

Extended CSV File Format

ANSYS DesignXplorer exports and imports CSV files with an “extended” CSV format. While the files primarily conform to the usual CSV standard, several non-standard formatting conventions are also supported.

Overview of the usual CSV standard:

• CSV stands for “Comma-Separated Values.”

• The optional header line indicates the name of each column.

• Each line is an independent record made of fields separated by commas.

• The format is not dependent on the locale, which means that the real number 12.345 is always written as “12.345”, regardless of the regional settings of the computer.

For example:

field_name,field_name,field_name CRLF
aaa,bbb,ccc CRLF
zzz,yyy,xxx CRLF

ANSYS DesignXplorer supports the following extensions of the standard CSV format:

• If a line starts with a “#” character, it is considered to be a comment line instead of a header or data line and is ignored.

• The header line is mandatory. It is the line where each parameter is identified by its ID (P1, P2, …, Pn) to describe each column. The IDs of the parameters in the header line match the IDs of the parameters in the project.

• The first column is used to indicate a name for each row.

• A file can contain several blocks of data, with the beginning of each block being determined by a new header line.

File example:

# Design Points of Design of Experiments
# P1 - ROADTHICKNESS ; P2 - ARCRADIUS ; P3 - MASS ; P4 - DEFORMATION
Name, P1, P2, P3, P4
1,2,120,9850000,0.224
2,1.69097677,120,9819097.68,0.56671394


3,2.30902323,120,9880902.32,0.072276773
4,2,101.4586062,8366688.5,0.271120983
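Based on the conventions above, a minimal reader can be sketched with the standard csv module. Detecting a new block by a leading "Name" cell is an assumption drawn from the file example; this is an illustrative sketch, not the DesignXplorer importer:

```python
import csv
from io import StringIO

def read_extended_csv(text):
    """Parse extended CSV: '#' lines are comments; each header line (first cell
    'Name', an assumption based on the documented example) opens a new block."""
    blocks = []
    for row in csv.reader(StringIO(text)):
        if not row or row[0].lstrip().startswith("#"):
            continue  # skip blank lines and comment lines
        if row[0].strip() == "Name":
            blocks.append({"header": [c.strip() for c in row], "rows": []})
        elif blocks:
            blocks[-1]["rows"].append([c.strip() for c in row])
    return blocks

sample = """# Design Points of Design of Experiments
Name, P1, P2, P3, P4
1,2,120,9850000,0.224
2,1.69097677,120,9819097.68,0.56671394
"""
blocks = read_extended_csv(sample)
print(blocks[0]["header"])       # ['Name', 'P1', 'P2', 'P3', 'P4']
print(len(blocks[0]["rows"]))    # 2
```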


Index

Symbols2D contour graph chart, 1042D Slices chart, 1073D contour graph chart, 105

AACT

external optimizer extensions, 146Adaptive Multiple-Objective Optimization, 251Adaptive Single-Objective, 241

Ccache of design point results, 197candidate points

candidate points results, 173display, 173properties, 174

creating, 160custom, 158editing, 158retrieving, 159verifying, 161viewing, 158working with, 157

constrained design spaces, 230constraints, 153contour graphs

2D, 1043D, 105

convergence criteria chart, 162multiple-objective optimization, 163single-objective optimization, 164

convergence curves chartKriging, 82Sparse Grid, 88

correlationcorrelation matrix chart, 34correlation scatter chart, 33correlation sensitivities chart, 34determination histogram chart, 35determination matrix chart, 34parameters correlation, 31

cumulative distribution function, 268

Ddatabase

raw optimization data, 198resuming, 5

decision support process, 253

Design Exploration, 27, 63, 77, 125, 181, 185options, 19overview, 27Parameters Correlation, 51reports, 210user interface, 6workflow, 13

design goalsdefining, 153

Design of Experiments, 29Six Sigma, 46

design points, 12adding, 160automatically saving to parameter set, 196cache, 197failed, 200handling failed , 203inserting, 197log files, 198preserving, 202preventing failures, 200raw optimization data, 198update location, 195update order, 195updating via RSM, 208using, 193

design points vs parameter plotobject reference, 31

design values, 191direct optimization

transferring data, 127distribution

choosing for random variables, 260data scatter values, 261measured data, 260no data, 262types, 263

DOE solution, 63domain

input parameters, 149parameter relationships, 150

Eerror messages, 279extensions

optimization, 146

Ggenetic algorithm, 246goal driven optimization

candidate points, 173charts, 162

283Release 15.0 - © SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information

of ANSYS, Inc. and its subsidiaries and affiliates.

Page 292: Design Exploration Users Guide

    convergence criteria, 162
    creating custom candidate points, 160
    creating design points, 160
    creating system, 126
    creating verification points, 160
    decision support process, 253
    defining domain, 148
    defining input parameters, 149
    defining parameter relationships, 150
    defining objectives and constraints, 153
    Direct, 125
    direct optimization, 127
    external optimizers, 146
    history, 165
    methods, 128
    multiple-objective methods, 244
        (see also Pareto dominance)
        Adaptive Multiple-Objective Optimization, 251
        MOGA, 246
        MOGA convergence criteria, 245
    properties
        Adaptive Multiple-Objective, 142
        Adaptive Single-Objective, 139
        MISQP, 137
        MOGA, 132
        NLPQL, 135
        screening, 130
    Response Surface, 125
    Samples chart, 178
    sampling method, 232
    sensitivities, 175
    single-objective methods, 233
        Adaptive Single-Objective, 241
        MISQP, 240
        NLPQL, 233
    theory, 224, 229
        best practices, 227
        principles, 225
        sampling, 230
    tradeoff studies, 176
    using, 125
goal driven optimization algorithms
    theory
        Screening, 232
goals, 153
Goodness of Fit, 89, 92

    advanced report, 96

H
histogram, 267
history chart, 165
    Chart view, 166
    input parameter, 171
    objective/constraint, 170
    Outline view, 169
    parameter relationship, 172
    sparklines, 169

I
input parameters
    changes to, 185, 190
    continuous, 188
    discrete, 187
    manufacturable values, 188
    using, 186

L
Local Sensitivity Chart
    example, 116
local sensitivity charts
    description, 112
    display, 113
    example
        Local Sensitivity chart, 120
    Local Sensitivity chart, 115
    Local Sensitivity Curves chart, 118
    object reference, 39
    properties
        Local Sensitivity chart, 117
        Local Sensitivity Curves chart, 123

M
manufacturable values
    changing, 191
    defining, 188
    in Local Sensitivity charts, 112
    in Response charts, 102
Meta-Model
    Changing, 90
    Goodness of Fit, 89
meta-model types
    Kriging, 78
    neural network, 83
    non-parametric regression, 83
    Standard Response Surface - Full 2nd-Order Polynomial, 77
Meta-Models, 77
min-max search, 98
MISQP, 240
MOGA, 246

N
neural network, 83


NLPQL, 233
non-parametric regression, 83

O
objectives, 153
optimization
    creating system, 126
    direct optimization, 127
    domain, 148
        input parameters, 149
        parameter relationships, 150
    extensions, 146
        downloading, 146
        installing, 146
        loading, 147
        project setup, 148
        selecting, 147
    methods, 128
    object reference, 40
    objectives and constraints, 153
    properties
        Adaptive Multiple-Objective, 142
        Adaptive Single-Objective, 139
        MISQP, 137
        MOGA, 132
        NLPQL, 135
        screening, 130
    using, 125
optimization algorithms
    theory
        Adaptive Multiple-Objective Optimization, 251
        Adaptive Single-Objective, 241
        MISQP, 240
        MOGA, 246
        NLPQL, 233
options, 19
output parameters
    changes to, 185
    using, 193

P
parameter relationships, 150
    sampling, 230
parameters, 10, 185
    changes to, 185, 190
    design variables, 191
    input, 186
        continuous, 188
        discrete, 187
        manufacturable values, 188
    output, 193
    parameter relationships, 150
    uncertainty variables, 191
parameters correlation, 31
Parameters Correlation, 51
parameters parallel plot
    object reference, 30
Pareto dominance, 244
Pending state, 208
points
    candidates, 157
    custom candidates, 158
postprocessing six sigma analysis results, 267
    cumulative distribution function, 268
    histogram, 267
    probability table, 269
    sensitivities, 270
Predicted vs. Observed chart, 95
probability table, 269
project report, 210

R
refinement, 89
    auto-refinement
        Kriging, 79
        Sparse Grid, 87
    Kriging, 78
    manual, 92, 98
refinement points
    adding, 160
Remote Solve Manager, 208
reports, 210
Response chart
    2D contour graph, 104
    2D Slices graph, 107
    3D contour graph, 105
    display, 102
    Goodness of Fit, 92
    object reference, 38
Response Chart
    example, 110
    properties, 111
response point, 37
response points, 13
    adding, 160
response surface
    Kriging
        auto-refinement properties, 81
        auto-refinement setup, 79
        convergence curves chart, 82
    meta-model refinement, 89
    object reference, 35
    Sparse Grid
        Sparse Grid convergence curves chart, 88


        Sparse Grid properties, 87
Response Surface Algorithms
    Box-Behnken Design, 215
    Central Composite Design, 214
    Kriging, 217
    Non-Parametric Regression, 218
    Sparse Grid, 221
    Standard Response Surface — Full 2nd-Order Polynomial, 216
response surface charts, 99
    Goodness of Fit, 92
    local sensitivities, 112
        display, 113
        Local Sensitivity chart, 115
        Local Sensitivity Curves chart, 118
    Predicted vs. Observed chart, 95
    Response chart, 101
        2D contour graph, 104
        2D Slices graph, 107
        3D contour graph, 105
    Spider charts, 112
response surface types
    Kriging, 78
    neural network, 83
    non-parametric regression, 83
    Sparse Grid, 84
    Standard Response Surface - Full 2nd-Order Polynomial, 77
responses
    min-max search, 98
    overview, 77

S
sample generation
    Adaptive Multiple-Objective, 142
    Adaptive Single-Objective, 139
    MISQP, 137
    MOGA, 132
    NLPQL, 135
    screening, 130, 232
    six sigma analysis, 267

samples
    Samples chart
        properties, 178
Samples chart
    object reference, 45
Screening, 232
sensitivities chart
    description, 204
    global, 45, 49, 182
    goal driven optimization, 45, 175
    local, 112
        Local Sensitivity chart, 115
        Local Sensitivity Curves chart, 118
    object reference, 45
    Six Sigma, 49, 182
    six sigma analysis, 270
Six Sigma analysis, 48, 181
    object reference, 46
    postprocessing, 181
    statistical measures, 182
    using, 181
six sigma analysis, 257
    choosing variables, 260
    guidelines, 260
    postprocessing, 267
    sample generation, 267
    sensitivities, 270
    theory, 271
    understanding, 258
Six Sigma analysis postprocessing, 181
    charts, 182
    sensitivities, 182
    tables, 181
solution
    DOE, 63
sparklines, 169
Sparse Grid, 84
Spider chart
    object reference, 39
    Using, 112

T
Tables, 205
    copy and paste, 207
    editable output parameters, 206
    exporting data, 207
    importing data from CSV, 207
Tradeoff chart
    object reference, 44
tradeoff studies, 176
troubleshooting, 279

U
uncertainty variables, 191
    choosing, 260
    choosing distribution, 260
    response surface analyses, 260
update
    design point update location, 195
    design point update order, 195
user interface, 6


V
Verification Points, 96
Verify
    candidate points, 161

W
workflow, 13
