Fully Automated Design of Super–High-Rise Building Structures by a Hybrid AI Model on a Massively Parallel Machine

Hojjat Adeli and H. S. Park

AI Magazine Volume 17, Number 3 (1996). Copyright © 1996, American Association for Artificial Intelligence. All rights reserved. 0738-4602-1996 / $2.00

■ This article presents an innovative research project (sponsored by the National Science Foundation, the American Iron and Steel Institute, and the American Institute of Steel Construction) where computationally elegant algorithms based on the integration of a novel connectionist computing model, mathematical optimization, and a massively parallel computer architecture are used to automate the complex process of engineering design.

Design automation of one-of-a-kind engineering systems is considered a particularly challenging problem (Adeli 1994). Adeli and his associates have been working on creating novel design theories and computational models with two broad objectives: (1) automation and (2) optimization (Adeli and Hung 1995; Adeli and Kamal 1993; Adeli and Zhang 1993; Adeli and Yeh 1989; Adeli and Balasubramanyam 1988a, 1988b; Paek and Adeli 1988; Adeli and Alrijleh 1987). Civil-engineering structures are typically one of a kind, as opposed to manufacturing designs that are often mass produced. To create computational models for structural design automation, we have been exploring new computing paradigms. Two such paradigms are neurocomputing and parallel processing.

We first created a neural dynamics model for optimal design of structures by integrating the penalty function method, the Lyapunov stability theorem, the Kuhn-Tucker conditions, and the neural dynamics concept (Adeli and Park 1995a). Neural dynamics is defined by a system of first-order differential equations governing time-evolution changes in node (neuron) activations. A pseudo-objective function in the form of a Lyapunov energy functional is defined using the exterior penalty function method. The Lyapunov stability theorem guarantees that solutions of the corresponding dynamic system (trajectories), for arbitrarily given starting points, approach an equilibrium point without increasing the value of the objective function. In other words, the new neural dynamics model for structural optimization problems guarantees global convergence and robustness. However, the model does not guarantee that the equilibrium point is a local minimum. We use the Kuhn-Tucker conditions to verify that the equilibrium point satisfies the necessary conditions for a local minimum; that is, a learning rule is developed by integrating the Kuhn-Tucker necessary conditions for a local minimum with the formulated Lyapunov function.
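To make the construction concrete, here is a minimal sketch of the formulation in generic notation (the symbols F, g_j, and r are our labels, not necessarily the papers'): F(x) is the structural weight to be minimized, g_j(x) ≤ 0 are the design constraints, and r > 0 is the exterior penalty coefficient.

```latex
% Pseudo-objective (Lyapunov energy functional) via the exterior penalty method,
% and the gradient-flow neural dynamics it induces:
\begin{align}
V(\mathbf{x}) &= F(\mathbf{x})
  + \frac{r}{2} \sum_{j} \bigl[\max\bigl(0,\, g_j(\mathbf{x})\bigr)\bigr]^2 \\
\dot{x}_i &= -\frac{\partial V}{\partial x_i}
  = -\frac{\partial F}{\partial x_i}
    - r \sum_{j} \max\bigl(0,\, g_j(\mathbf{x})\bigr)\,\frac{\partial g_j}{\partial x_i} \\
\dot{V} &= \sum_i \frac{\partial V}{\partial x_i}\,\dot{x}_i
  = -\sum_i \left(\frac{\partial V}{\partial x_i}\right)^2 \;\le\; 0
\end{align}
```

Because V̇ ≤ 0 along every trajectory, V is a Lyapunov function for the dynamics, which is what guarantees descent from any starting point; at an equilibrium (ẋ = 0), the expression for ẋ_i reduces to the Kuhn-Tucker stationarity condition with multipliers λ_j = r max(0, g_j).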

The neural dynamics model was first applied to a linear optimization problem, the minimum-weight plastic design of low-rise planar steel frames (Park and Adeli 1995). In this application, nonlinear code-specified constraints were not used. It was shown that the neural dynamics model yields stable results no matter how the starting point is selected.


Next, encouraged by the robustness and global convergence of the model, we developed a nonlinear neural dynamics model for optimization of large space structures (Adeli and Park 1995b). The model consists of two information flow-control components and two information-server components. The schematic functional interaction of the various components is shown in figure 1.

The first information flow-control component is a neural dynamics system of differential equations that corresponds to a learning rule governing time-evolution changes in node activations. The second component is the network topology, with one variable layer and as many constraint layers as the number of loading conditions. The first information-server component performs finite-element analysis and finds the magnitudes of constraint violations. The other information server finds the design sensitivity coefficients. The nonlinear neural dynamics model was applied to minimum-weight design of four example structures, the largest being a 1,310-member, 37-story space truss structure. It was concluded that the new approach results in a highly robust algorithm for optimization of large structures (Adeli and Park 1995c).

To achieve automated optimum design of realistic structures subjected to actual design constraints of commonly used design codes (such as the American Institute of Steel Construction [AISC] allowable stress design [ASD] and load and resistance factor design [LRFD] specifications [AISC 1994, 1989]), we developed a hybrid counterpropagation neural (CPN) network–neural dynamics model for discrete optimization of structures consisting of commercially available sections, such as the wide-flange (W) shapes used in steel structures. The topology of the hybrid neural network model is shown in figure 2 (Adeli and Park 1996). The number of nodes in the variable layer corresponds to the number of independent design variables (K) in the structural optimization problem. Nodes in the constraint layer receive the discrete cross-sectional properties from the CPN as input, evaluate the prescribed constraints, and generate the magnitudes of constraint violations as output. The functional activations at the nodes in the variable layer receive information about the search direction (encoded as the weights of the links connecting the constraint layer to the variable layer) and the magnitudes of constraint violations as input and generate the improved design solutions as output.
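As a reading aid, the layer structure just described can be written down as a small data object. This is purely illustrative bookkeeping (names such as HybridTopology are ours, not code from the original system):

```python
from dataclasses import dataclass

@dataclass
class HybridTopology:
    """Layer sizes of the hybrid CPN-neural dynamics network (illustrative)."""
    n_variables: int    # K: independent (linked) design variables in the variable layer
    n_constraints: int  # constraints evaluated per loading condition
    n_load_cases: int   # one constraint layer per loading condition
    n_shapes: int       # available standard W shapes (nodes in the CPN competition layer)
    n_properties: int   # section properties recalled by the CPN interpolation layer

    def layer_sizes(self) -> dict:
        return {
            "variable": self.n_variables,
            "cpn_competition": self.n_shapes,
            "cpn_interpolation": self.n_properties,
            "constraint_layers": [self.n_constraints] * self.n_load_cases,
        }
```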

The number of nodes in the constraint layer is equal to the total number of constraints imposed on the structure. There are as many constraint layers as there are loading combinations acting on the structure. The number of stress constraints is in the thousands for a large structure with thousands of members; as such, the number of violated stress constraints requiring computation of sensitivity coefficients tends to be very large. To accelerate the optimization process and reduce the central processing unit time required to compute sensitivity coefficients, only the most violated constraint in each group of members linked together as one design variable is allowed to represent the status of the constraints for the group. Therefore, a competition is introduced into the constraint layer to select the most critical node among the nodes belonging to one linked design variable.
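A sketch of that competition, assuming a flat dictionary of violation magnitudes (the data layout and function name are hypothetical):

```python
def most_violated_per_group(violations, groups):
    """For each group of members linked as one design variable, keep only the
    member with the largest constraint violation to represent the whole group.

    violations: dict member_id -> violation magnitude (0.0 if satisfied)
    groups:     dict group_id  -> list of member_ids in that linked group
    """
    return {
        g: max(members, key=lambda m: violations[m])
        for g, members in groups.items()
    }
```

Only the winners returned here need sensitivity coefficients, which is what cuts the computation.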

Both excitatory and inhibitory connection types are used to adjust the states of the nodes. Gradient information of the objective function is assigned to the inhibitory recurrent connections in the variable layer. Gradient information of the constraint functions is assigned to the inhibitory connections from the constraint layer to the variable layer.
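In discrete time, one Euler step of the resulting learning rule might look like the following sketch; the step size eta and penalty coefficient r are assumed hyperparameters, not values from the papers:

```python
import numpy as np

def dynamics_step(x, grad_f, g, grad_g, r=10.0, eta=1e-3):
    """One Euler step of the neural dynamics update (illustrative).

    x:       (K,) current design variables
    grad_f:  (K,) gradient of the objective (inhibitory recurrent connections)
    g:       (J,) constraint values, g_j <= 0 when satisfied
    grad_g:  (J, K) constraint gradients (inhibitory constraint-to-variable connections)
    """
    violation = np.maximum(0.0, g)           # exterior penalty: only violated constraints act
    dx = -(grad_f + r * violation @ grad_g)  # x_dot = -dV/dx
    return x + eta * dx
```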

The counterpropagation network is a combination of supervised and unsupervised mapping neural networks (Hecht-Nielsen 1987a, 1987b). The counterpropagation part of the model consists of two layers: (1) competition and (2) interpolation. Nodes in the competition layer receive the values of improved design solutions from the nodes in the variable layer, calculate the Euclidean distances between the input and the connection weights, and select the winning node. Nodes in the interpolation layer recall the corresponding cross-sectional properties encoded in the connection weights associated with the winning node.

Optimization of large structures with thousands of members subjected to actual constraints of commonly used codes requires an inordinate amount of computer processing time and can be done only on multiprocessor supercomputers (Adeli 1992a, 1992b). A high degree of parallelism can be exploited in a neural computing model (Adeli and Hung 1995). Consequently, we created a data-parallel neural dynamics model for discrete optimization of large steel structures and implemented the algorithm on a distributed-memory multiprocessor, the massively parallel Connection Machine CM-5 system (Park and Adeli 1996).



Figure 1. Functional Interactions of Various Components of the Neural Dynamics Model for Optimization of Structures.

There are four stages in the hybrid neural dynamics model: (1) mapping the continuous design variables to commercially available discrete sections using a trained CPN network; (2) generating the element stiffness matrixes in the local coordinates, transforming them to the global coordinates, and solving the resulting simultaneous linear equations using the preconditioned conjugate gradient (PCG) method; (3) evaluating the constraints based on the AISC ASD or LRFD specifications; and (4) computing the improved design variables from the nonlinear neural dynamics model.
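One outer iteration of the model then has roughly the following shape. This is a sketch only; the five callables stand in for the components described in this article and are supplied by the caller:

```python
import numpy as np

def optimize(x0, cpn_recall, analyze, evaluate, dynamics_step, weight_gradient,
             max_iter=20, tol=1e-4):
    """Illustrative outer loop of the hybrid CPN-neural dynamics model."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        sections = cpn_recall(x)                   # stage 1: continuous x -> discrete W shapes
        response = analyze(sections)               # stage 2: FEA (stiffness assembly + PCG solve)
        g, grad_g = evaluate(sections, response)   # stage 3: AISC ASD/LRFD constraint checks
        x_new = dynamics_step(x, weight_gradient(sections), g, grad_g)  # stage 4: improved design
        if np.max(np.abs(x_new - x)) < tol:        # stop when the design stops changing
            return x_new
        x = x_new
    return x
```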


Figure 2. The Topology of the Hybrid Neural Network Model.


The design variables are the cross-sectional areas of the members. For integrated design of steel structures, a database of cross-section properties is needed for computation of element stiffness matrixes and evaluation of the AISC ASD and LRFD constraints. A counterpropagation network consisting of competition and interpolation layers is used to learn the relationship between the cross-sectional area of a standard wide-flange (W) shape and its other properties, such as its radii of gyration (Adeli and Park 1995b).

The recalling process in the CPN network is done in two steps. In the first step, for each design variable, a competition is created among the nodes in the competition layer for selection of the winning node. The weights of the links between the variable and competition layers represent the set of cross-sectional areas of the available standard shapes. In the second step, the discrete cross-sectional properties encoded in the form of weights of the links between the competition and interpolation layers are recalled. The weights of the links connecting the winning nodes to the nodes in the interpolation layer are the cross-sectional properties corresponding to an improved design variable.
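A minimal sketch of that two-step recall for a single design variable (NumPy; the property-table layout is an assumption):

```python
import numpy as np

def cpn_recall_one(area, shape_areas, shape_props):
    """Two-step CPN recall for one design variable (illustrative).

    area:        improved (continuous) cross-sectional area from the variable layer
    shape_areas: (S,) areas of the S available standard W shapes
                 (weights of the variable-to-competition links)
    shape_props: (S, P) section properties per shape, e.g. radii of gyration
                 (weights of the competition-to-interpolation links)
    """
    winner = np.argmin(np.abs(shape_areas - area))   # step 1: competition (closest area wins)
    return shape_areas[winner], shape_props[winner]  # step 2: recall encoded properties
```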

In the second stage of the neural dynamics model for optimal design of structures, sets of linear simultaneous equations need to be solved to find the nodal displacements. Iterative methods are deemed more appropriate for distributed-memory computers, where the size of the memory is limited (for example, to 8 megabytes [MB] in the case of the CM-5 system used in this research). As a result, a data-parallel PCG method is developed in this research (Adeli and Kumar 1995).
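For reference, a serial version of the preconditioned conjugate gradient iteration looks like this; a simple Jacobi (diagonal) preconditioner stands in here for whatever preconditioner the data-parallel implementation actually used:

```python
import numpy as np

def pcg(K, f, tol=1e-8, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for K u = f (K symmetric positive definite)."""
    u = np.zeros_like(f)
    r = f - K @ u               # residual
    M_inv = 1.0 / np.diag(K)    # Jacobi preconditioner: inverse of diag(K)
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rz / (p @ Kp)   # step length along search direction p
        u += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) < tol * np.linalg.norm(f):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # new K-conjugate search direction
        rz = rz_new
    return u
```

On the CM-5, the matrix-vector product and the inner products are the operations that parallelize naturally across processing nodes, which is why an iterative solver fits the memory-limited distributed setting better than a direct factorization.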

The third stage consists of constraint evaluation using the nodal displacements and member stresses obtained in the previous stage. Three types of constraints are considered: (1) fabrication, (2) displacement, and (3) stress (including buckling). For the LRFD code, the primary stress constraint for a general beam-column member is a highly nonlinear and implicit function of the design variables.
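To give a sense of the nonlinearity, the LRFD beam-column check couples axial force and biaxial bending through interaction equations of the following form (the standard AISC LRFD H1 interaction check, stated here from the specification in its common form; consult the specification for the exact provisions):

```latex
% AISC LRFD beam-column interaction equations (H1-1a and H1-1b):
\begin{align}
\text{for } \frac{P_u}{\phi P_n} \ge 0.2 &:\quad
\frac{P_u}{\phi P_n}
+ \frac{8}{9}\left(\frac{M_{ux}}{\phi_b M_{nx}} + \frac{M_{uy}}{\phi_b M_{ny}}\right) \le 1.0 \\
\text{for } \frac{P_u}{\phi P_n} < 0.2 &:\quad
\frac{P_u}{2\,\phi P_n}
+ \left(\frac{M_{ux}}{\phi_b M_{nx}} + \frac{M_{uy}}{\phi_b M_{ny}}\right) \le 1.0
\end{align}
```

Here P_u, M_ux, and M_uy are required strengths including second-order moment amplification, and the design strengths φP_n, φ_b M_nx, and φ_b M_ny themselves depend on member slenderness and section properties, so the constraint is an implicit function of the design variables.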

In the final stage, the nonlinear neural dynamics model acts as an optimizer to produce improved design variables from the initial design variables. It consists of a variable layer and a number of constraint layers equal to the number of different loading conditions.

In our data-parallel neural dynamics model (Park and Adeli 1996), we exploit parallelism in the four stages of the neural dynamics model. The model has been implemented on a CM-5 system. The main components of the CM-5 system are a number of processing nodes (PNs); partition managers (PMs); and two high-speed, high-bandwidth communication networks called the data and control networks. A PN has four vector units (VUs) with 8 MB of memory per unit and can perform high-speed vector arithmetic computations with a theoretical peak performance of 128 million floating-point operations per second (MFLOPS).


Figure 3. A Very Large Structure Designed Automatically by the Neural Dynamics Algorithm.

The neural dynamics algorithm developed in this research can be applied to steel structures of arbitrary size and configuration. Figure 3 presents an example of a very large structure designed automatically by the neural dynamics algorithm on a CM-5 system. This example is a 144-story steel super-high-rise building structure with a height of 526.7 meters (1,728 feet). The structure is a modified tube-in-tube system consisting of a space moment-resisting frame with cross-bracings on the exterior of the structure. The structure has 8,463 joints; 20,096 members; and an aspect ratio of 7.2. Based on symmetry and practical considerations, the members of the structure are divided into 568 groups (figure 4). Complete details of the structure can be found in a forthcoming article (Park and Adeli 1996).

The neural dynamics model yields minimum weights of 682.2 meganewtons (MN) (153,381.4 kilopounds [kips]) and 669.3 MN (150,467.2 kips) after 20 iterations using the ASD and LRFD codes, respectively. These weights translate into 1.57 kilopascals (kPa) (34.43 pounds per square foot [psf]) and 1.54 kPa (33.78 psf) for the ASD and LRFD codes, respectively. It can be noted that the amount of steel used in what is currently the tallest building structure in the world (the 109-story, 445-meter-high Sears building in Chicago) is about 33 psf (Adeli 1988).

An attractive characteristic of the neural dynamics model is its robustness and stability. We have studied the convergence histories using various starting points and noted that the model is insensitive to the selection of the initial design. This insensitivity is especially noteworthy because we applied the model to optimization of large space-frame structures subjected to actual design constraints of the AISC ASD and LRFD codes. In particular, the constraints of the LRFD code are complicated, highly nonlinear, and implicit functions of the design variables. Further, the LRFD code requires the consideration of the second-order P-δ and P-Δ effects.

Until now, the largest structural optimization problem solved and reported in the literature was a 100-story high-rise structure with 6,136 members and as many variables (no design-linking strategy was used) (Soegiarso and Adeli 1994). That structure was a space truss not subjected to any code-specified constraints. The example presented in this article is a space-frame structure with complicated code-specified design constraints and is by far the largest structure optimized according to the AISC ASD and LRFD codes ever reported in the literature. This research demonstrates how a new level is achieved in design automation of one-of-a-kind engineering systems through the ingenious use and integration of a novel computational paradigm, mathematical optimization, and a new high-performance computer architecture.



Figure 4. Plan of the Building in Figure 3.

Acknowledgments

This article is a brief summary of five original research papers by the authors. This research is sponsored by the National Science Foundation under grant MSS-9222114, the American Iron and Steel Institute, and the American Institute of Steel Construction. Computing time on the CM-5 was provided by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

References

Adeli, H., ed. 1994. Advances in Design Optimization. London: Chapman and Hall.

Adeli, H., ed. 1992a. Parallel Processing in Computational Mechanics. New York: Marcel Dekker.

Adeli, H., ed. 1992b. Supercomputing in Engineering Analysis. New York: Marcel Dekker.

Adeli, H. 1988. Interactive Microcomputer-Aided Structural Steel Design. Englewood Cliffs, N.J.: Prentice-Hall.

Adeli, H., and Alrijleh, M. M. 1987. Roof Expert. PC AI 1(2): 30–34.

Adeli, H., and Balasubramanyam, K. V. 1988a. A Novel Approach to Expert Systems for Design of Large Structures. AI Magazine 9(4): 54–63.

Adeli, H., and Balasubramanyam, K. V. 1988b. Expert Systems for Structural Design—A New Generation. Englewood Cliffs, N.J.: Prentice-Hall.

Adeli, H., and Hung, S. L. 1995. Machine Learning—Neural Networks, Genetic Algorithms, and Fuzzy Systems. New York: Wiley.

Adeli, H., and Kamal, O. 1993. Parallel Processing in Structural Engineering. New York: Elsevier.

Adeli, H., and Kumar, S. 1995. Concurrent Structural Optimization on a Massively Parallel Supercomputer. Journal of Structural Engineering 121(11): 1588–1597.

Adeli, H., and Park, H. S. 1996. Hybrid CPN–Neural Dynamics Model for Discrete Optimization of Steel Structures. Microcomputers in Civil Engineering 11(5): 355–366.

Adeli, H., and Park, H. S. 1995a. A Neural Dynamics Model for Structural Optimization—Theory. Computers and Structures 57(3): 383–390.

Adeli, H., and Park, H. S. 1995b. Counterpropagation Neural Networks in Structural Engineering. Journal of Structural Engineering 121(8): 1205–1212.

Adeli, H., and Park, H. S. 1995c. Optimization of Space Structures by Neural Dynamics. Neural Networks 8(5): 769–781.

Adeli, H., and Yeh, C. 1989. Perceptron Learning in Engineering Design. Microcomputers in Civil Engineering 4(4): 247–256.

Adeli, H., and Zhang, J. 1993. An Improved Perceptron Learning Algorithm. Neural, Parallel, and Scientific Computations 1(2): 141–152.

AISC. 1994. Manual of Steel Construction, Load and Resistance Factor Design, Volume 1: Structural Members, Specifications, and Codes. Chicago: American Institute of Steel Construction.

AISC. 1989. Manual of Steel Construction, Allowable Stress Design. Chicago: American Institute of Steel Construction.

Hecht-Nielsen, R. 1987a. Counterpropagation Networks. In Proceedings of the IEEE First International Conference on Neural Networks, Volume 2, 19–32. Washington, D.C.: IEEE Computer Society.

Hecht-Nielsen, R. 1987b. Counterpropagation Networks. Applied Optics 26(23): 4979–4985.

Paek, Y., and Adeli, H. 1988. STEELEX: A Coupled Expert System for Integrated Design of Steel Structures. Engineering Applications of Artificial Intelligence 1(3): 170–180.

Park, H. S., and Adeli, H. 1996. Data Parallel Neural Dynamics Model for Integrated Design of Steel Structures. Forthcoming.

Park, H. S., and Adeli, H. 1995. A Neural Dynamics Model for Structural Optimization—Application to Plastic Design of Structures. Computers and Structures 57(3): 391–399.

Soegiarso, R., and Adeli, H. 1994. Impact of Vectorization on Large-Scale Structural Optimization. Structural Optimization 7: 117–125.

Hojjat Adeli received his Ph.D. from Stanford University in 1976. He is currently a professor of civil engineering and a member of the Center for Cognitive Science at The Ohio State University. A contributor to 44 different research journals, he has authored over 270 research and scientific publications in various fields of computer science and engineering. His most recent book is entitled Machine Learning—Neural Networks, Genetic Algorithms, and Fuzzy Systems (Wiley 1995). Adeli is also the founder and editor in chief of Integrated Computer-Aided Engineering. He has been an organizer or a member of the advisory board of more than 80 national and international conferences and a contributor to more than 90 conferences.

H. S. Park received his Ph.D. from The Ohio State University in 1994. He is currently a senior researcher and software developer at Daewoo Institute of Construction Technology, a research and development firm in Korea. Park is the coauthor of seven research articles in the area of neural network computing.
