

ISSN 1641-8581

Publishing House AKAPIT

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

Contents

Spis treści

Anthony R. Thornton, Thomas Weinhart, Vitaliy Ogarko, Stefan Luding
MULTI-SCALE METHODS FOR MULTI-COMPONENT GRANULAR MATERIALS ... 197

Marta Serafin, Witold Cecot
NUMERICAL ASPECTS OF COMPUTATIONAL HOMOGENIZATION ... 213

Liang Xia, Balaji Raghavan, Piotr Breitkopf, Weihong Zhang
A POD/PGD REDUCTION APPROACH FOR AN EFFICIENT PARAMETERIZATION OF DATA-DRIVEN MATERIAL MICROSTRUCTURE MODELS ... 219

Marek Klimczak, Witold Cecot
LOCAL NUMERICAL HOMOGENIZATION IN MODELING OF HETEROGENEOUS VISCO-ELASTIC MATERIALS ... 226

Balbina Wcisło, Jerzy Pamin
NUMERICAL SIMULATIONS OF STRAIN LOCALIZATION FOR LARGE STRAIN DAMAGE-PLASTICITY MODEL ... 231

Andrzej Milenin, Piotr Kustra, Dorota J. Byrska-Wójcik
THE MULTI-SCALE NUMERICAL AND EXPERIMENTAL ANALYSIS OF COLD WIRE DRAWING FOR HARDLY DEFORMABLE BIOCOMPATIBLE MAGNESIUM ALLOY ... 238

Piotr Gurgul, Marcin Sieniek, Maciej Paszyński, Łukasz Madej
THREE-DIMENSIONAL ADAPTIVE ALGORITHM FOR CONTINUOUS APPROXIMATIONS OF MATERIAL DATA USING SPACE PROJECTION ... 245

Wacław Kuś, Radosław Górski
PARALLEL IDENTIFICATION OF VOIDS IN A MICROSTRUCTURE USING THE BOUNDARY ELEMENT METHOD AND THE BIOINSPIRED ALGORITHM ... 251

Krzysztof Muszka, Łukasz Madej
APPLICATION OF THE THREE DIMENSIONAL DIGITAL MATERIAL REPRESENTATION APPROACH TO MODEL MICROSTRUCTURE INHOMOGENEITY DURING PROCESSES INVOLVING STRAIN PATH CHANGES ... 258

Ewa Majchrzak, Bohdan Mochnacki
IDENTIFICATION OF INTERFACE POSITION IN TWO-LAYERED DOMAIN USING GRADIENT METHOD COUPLED WITH THE BEM ... 264


Agnieszka Cebo-Rudnicka, Zbigniew Malinowski, Beata Hadała, Tadeusz Telejko
INFLUENCE OF THE SAMPLE GEOMETRY ON THE INVERSE DETERMINATION OF THE HEAT TRANSFER COEFFICIENT DISTRIBUTION ON THE AXIALLY SYMMETRICAL SAMPLE COOLED BY THE WATER SPRAY ... 269

Michael Petrov, Pavel Petrov, Juergen Bast, Anatoly Sheypak
INVESTIGATION OF THE HEAT TRANSPORT DURING THE HOLLOW SPHERES PRODUCTION FROM THE TIN MELT ... 276

Sławomir Świłło
AN EXPERIMENTAL STUDY OF MATERIAL FLOW AND SURFACE QUALITY USING IMAGE PROCESSING IN THE HYDRAULIC BULGE TEST ... 283

Szymon Lechwar
SELECTION OF SIGNIFICANT VISUAL FEATURES FOR CLASSIFICATION OF SCALES USING BOOSTING TREES MODEL ... 289

Stanisława Kluska-Nawarecka, Zenon Pirowski, Zora Jančíková, Milan Vrožina, Jiří David, Krzysztof Regulski, Dorota Wilk-Kołodziejczyk
A USER-INSPIRED KNOWLEDGE SYSTEM FOR THE NEEDS OF METAL PROCESSING INDUSTRY ... 295

Stanisława Kluska-Nawarecka, Edward Nawarecki, Grzegorz Dobrowolski, Arkadiusz Haratym, Krzysztof Regulski
THE PLATFORM FOR SEMANTIC INTEGRATION AND SHARING TECHNOLOGICAL KNOWLEDGE ON METAL PROCESSING AND CASTING ... 304

Jan Kusiak, Gabriel Rojek, Łukasz Sztangret, Piotr Jarosz
INDUSTRIAL PROCESS CONTROL WITH CASE-BASED REASONING APPROACH ... 313

Krzysztof Regulski, Danuta Szeliga, Jacek Rońda, Andrzej Kuźniar, Rafał Puc
RULE-BASED SIMPLIFIED PROCEDURE FOR MODELING OF STRESS RELAXATION ... 320

Sławomir Świłło
EXPERIMENTAL APPARATUS FOR SHEET METAL HEMMING ANALYSIS ... 326

Sławomir Świłło, Piotr Czyżewski
AN EXPERIMENTAL AND NUMERICAL STUDY OF MATERIAL DEFORMATION OF A BLANKING PROCESS ... 333

Piotr Lacki, Janina Adamus, Wojciech Więckowski, Julita Winowiecka
MODELLING OF STAMPING PROCESS OF TITANIUM TAILOR-WELDED BLANKS ... 339

Andrzej Woźniakowski, Józef Deniszczyk, Omar Adjaoud, Benjamin P. Burton
FIRST PRINCIPLES PHASE DIAGRAM CALCULATIONS FOR THE CdSe-CdS WURTZITE, ZINCBLENDE AND ROCK SALT STRUCTURES ... 345

Andrzej Woźniakowski, Józef Deniszczyk
PHASE DIAGRAM CALCULATIONS FOR THE ZnSe-BeSe SYSTEM BY FIRST-PRINCIPLES BASED THERMODYNAMIC MONTE CARLO INTEGRATION ... 351

Michal Gzyl, Andrzej Rosochowski, Andrzej Milenin, Lech Olejnik
MODELLING MICROSTRUCTURE EVOLUTION DURING EQUAL CHANNEL ANGULAR PRESSING OF MAGNESIUM ALLOYS USING CELLULAR AUTOMATA FINITE ELEMENT METHOD ... 357

Bartek Wierzba
THE MIGRATION OF KIRKENDALL PLANE DURING DIFFUSION ... 364

Onur Güvenc, Thomas Henke, Gottfried Laschet, Bernd Böttger, Markus Apel, Markus Bambach, Gerhard Hirt
MODELING OF STATIC RECRYSTALLIZATION KINETICS BY COUPLING CRYSTAL PLASTICITY FEM AND MULTIPHASE FIELD CALCULATIONS ... 368


MULTI-SCALE METHODS FOR MULTI-COMPONENT GRANULAR MATERIALS

ANTHONY R. THORNTON1,2, THOMAS WEINHART1, VITALIY OGARKO1, STEFAN LUDING1

1 Multi-Scale Mechanics, Department of Mechanical Engineering, University of Twente, 7500 AE Enschede, The Netherlands

2 Mathematics of Computational Science, Department of Applied Mathematics, University of Twente, 7500 AE Enschede, The Netherlands

*Corresponding author: [email protected]

Abstract

In this paper we review recent progress made towards understanding granular chute flow using multi-scale modelling techniques. We introduce the discrete particle method (DPM) and explain how to construct continuum fields from discrete data in a way that is consistent with the macroscopic concepts of mass and momentum conservation. We present a novel advanced contact detection method that is capable of dealing with multiple distinct granular components with sizes ranging over orders of magnitude. We discuss how such advanced DPM simulations can be used to obtain closure relations for continuum frameworks (the mapping between the micro-scale and macro-scale variables and functions): the micro-macro transition. This enables the development of continuum models that contain information about the micro-structure of the granular materials without the need for a priori assumptions.

The micro-macro transition is illustrated with two granular chute/avalanche flow problems. The first is a shallow granular chute flow where the main unknown in the continuum models is the macro-friction coefficient at the base. We investigate how this depends on both the properties of the flow particles and the surface over which the flow is taking place. The second problem is that of gravity-driven segregation in poly-dispersed granular chute flows. In both these problems we consider small steady-state periodic-box DPM simulations to obtain the closure relations.

Finally, we discuss the issue of the validity of such closure relations for complex dynamic problems that are a long way from the simple periodic-box situation from which they were obtained. For simple situations the pre-computed closure relations will hold. In more complicated situations new strategies are required, where macro-continuum and discrete micro-models are coupled with dynamic, two-way feedback between them.

Key words: coupled multiscale model, multi-component granular materials, Navier-Stokes equation, discrete particle simulations

1. INTRODUCTION

Granular materials are everywhere in nature and many industrial processes use materials in granular form, as they are easy to produce, process, transport and store. Many natural flows are comprised of granular materials, and common examples include rock slides that can contain many cubic kilometres of material. Granular materials are, after water, the second most widely manipulated substance on the planet (de Gennes, 2008); however, the field is considerably behind the field of fluids and currently no unified continuum description exists, i.e. there are no granular Navier-Stokes-style constitutive equations. However, simplified descriptions do exist for certain limiting scenarios: examples include rapid granular flows where kinetic theory is valid (e.g., Jenkins & Savage, 1983; Lun et al., 1984) and shallow dense flows where shallow-layer models are applicable (e.g. Savage & Hutter, 1989; Gray, 2003; Bokhove & Thornton, 2012).


For the case of quasi-static materials the situation is even more complicated and, here, more research on a continuum description is required.

Flows in both nature and industry show highly complex behaviour as they are influenced by many factors, such as: poly-dispersity in size and density; variations in density; non-uniform shape; complex basal topography; surface contact properties; coexistence of static, steady and accelerating material; and flow obstacles and constrictions.

Discrete particle methods (DPMs) are a very powerful computational tool that allows the simulation of individual particles with complex interactions, arbitrary shapes, in arbitrary geometries, by solving Newton's laws of motion for each particle (e.g. Weinhart et al., 2012). How to capture elaborate interactions such as sintering, complex (non-spherical) shape, breakage and cohesion of particles in the contact model is an active area of research, and many steps forward have recently been made. DPM is a very powerful and flexible tool; however, it is computationally very expensive. With the recent increase in computational power it is now possible to simulate flows containing a few million particles; however, for 1 mm particles this represents a flow of approximately 1 litre, which is many orders of magnitude smaller than the flows found in industrial and natural settings.

Continuum methods are able to simulate the volume of real industrial or geophysical flows, but have to make averaging approximations that reduce the properties of a huge number of particles to a handful of averaged quantities. Once these averaged parameters have been tuned via experimental or historical data, these models can be surprisingly accurate; but a model tuned for one flow configuration often has no predictive power for another setup. Therefore, it is not possible in this fashion to create a unified model capable of describing a new scenario.

DPM can be used to obtain the mapping between the microscopic and macroscopic parameters, allowing determination of the macroscopic data without the need for a priori knowledge. In simple situations, it is possible to pre-compute the relations between the particle and continuum descriptions (the micro-macro transition method); but in more complicated situations two-way coupled multi-scale modelling (CMSM) is required.

For the micro-macro transition, discrete particle simulations are used to determine unknown functions or parameters in the continuum model as a function of microscopic particle parameters and other state variables; these mappings are referred to as closure relations (e.g. Thornton et al., 2012; 2013). For CMSM, continuum and micro-scale models are dynamically coupled, with two-way feedback between the computational models. The coupling is done in selective regions in space and time, thus reducing computational expense and allowing the simulation of complex granular flows.

Fig. 1. Illustration of the modelling philosophy of the undertaken research. Solid lines indicate the main steps of the method and dashed lines the verification steps; (a) shows the idea for the micro-macro transition and (b) for two-way coupled multi-scale modelling (CMSM).


For problems that contain only small complex regions, one can use a localised hybrid approach, where a particle method is applied in the complex region and is coupled through the boundary conditions to a continuum solver (Markesteijn, 2011). For large complex regions or globally complex problems, an iterative approach can be used, where a continuum solver is used everywhere, and small particle simulations are run each time the closure relations need to be evaluated, see e.g. (Weinan, 2007).

The ultimate aim of this research would be to determine the unknowns (material/contact properties) in the contact law from a few standard experiments on individual particles.

Our approach is illustrated in figure 1. The idea is to obtain the particle material properties from small (individual) particle experiments and use this information to determine the parameters of the contact model for DPM simulations. We then perform small-scale periodic-box particle simulations and use this data to determine the unknowns in the continuum models (i.e. to close the model). It is then expected that this closed continuum model is able to simulate the flow of the same particles in more complex and larger systems. The validity of this closed model will be investigated by comparing its results with both computationally expensive large-scale simulations and experiments. For situations where this one-way coupled micro-macro approach fails, we will use the computationally more expensive two-way coupled models to simulate the flow.

1.1. Outline

It is possible to apply CMSM or micro-macro methods to completely general three-dimensional Cauchy mass and momentum equations and use the DPM to determine the unspecified constitutive relations for the stresses; however, we will not take this approach. We will focus on scenarios where simplifying approximations are made which lead to continuum models (still containing undetermined quantities) that are valid only in certain limits.

In this paper we discuss the approach we are taking, review the current steps we have made and discuss the future directions and open issues with this approach. Firstly, we will consider shallow granular flows (of major importance to many areas of geophysics) and, secondly, gravity-driven segregation of poly-dispersed granular material. For the second problem, the efficiency of DPM becomes an issue and a new algorithm will have to be considered.

The outline of the rest of the paper is: §2, introduction to DPM; §3, how to construct continuum fields from discrete particle data (how to perform the micro-macro transition); §4, the micro-macro transition for shallow granular flows; §5, collision detection for DPM simulations with wide particle-size distributions; §6, the micro-macro transition for segregating flows; and §7, future prospects and conclusions.

2. INTRODUCTION TO DPM

In the discrete particle method, often called the discrete element method, Newton's laws are solved for both the translational and the rotational degrees of freedom. The problem is fully determined by specifying the forces and torques between two interacting particles. Here, we will briefly review three commonly used contact laws for almost-spherical granular particles: linear spring-dashpot, Hertzian springs and plastic models.

Each particle i has diameter $d_i$, mass $m_i$, position $\mathbf r_i$, velocity $\mathbf v_i$ and angular velocity $\boldsymbol\omega_i$. Since we are assuming that the particles are almost spherical and only slightly soft, the contacts can be treated as occurring at a single point. The relative distance between two particles i and j is $r_{ij} = |\mathbf r_i - \mathbf r_j|$, the unit normal is $\hat{\mathbf n}_{ij} = (\mathbf r_i - \mathbf r_j)/r_{ij}$, the relative velocity is $\mathbf v_{ij} = \mathbf v_i - \mathbf v_j$, and the overlap is

$$\delta^n_{ij} = \max\!\left(0,\; \tfrac{1}{2}(d_i + d_j) - r_{ij}\right).$$

The branch vector (the vector from the centre of particle i to the contact point) is $\mathbf b_{ij} = -\tfrac{1}{2}\left(d_i - \delta^n_{ij}\right)\hat{\mathbf n}_{ij}$.

Two particles are in contact if their overlap is positive. The normal and tangential relative velocities at the contact point are given by:

$$\mathbf v^n_{ij} = \left(\mathbf v_{ij}\cdot\hat{\mathbf n}_{ij}\right)\hat{\mathbf n}_{ij},$$

$$\mathbf v^t_{ij} = \mathbf v_{ij} - \left(\mathbf v_{ij}\cdot\hat{\mathbf n}_{ij}\right)\hat{\mathbf n}_{ij} + \boldsymbol\omega_i\times\mathbf b_{ij} - \boldsymbol\omega_j\times\mathbf b_{ji}.$$

The total force on particle i is a combination of the normal and tangential contact forces $\mathbf f^n_{ij} + \mathbf f^t_{ij}$ from each particle j that is in contact with particle i, and external forces, which for this investigation will be limited to gravity, $m_i\mathbf g$. Different contact models exist for the normal, $\mathbf f^n_{ij}$, and tangential, $\mathbf f^t_{ij}$, forces. For each contact model, when the tangential-to-normal force ratio becomes larger than a contact friction coefficient, $\mu^c$, the tangential force yields and the particles slide; we then truncate the magnitude of the tangential force as necessary to satisfy $|\mathbf f^t_{ij}| \le \mu^c\, |\mathbf f^n_{ij}|$.


We integrate the resulting force and torque relations in time using Velocity-Verlet and forward Euler (Allen & Tildesley, 1989) with a time step $\Delta t = t_c/50$, where $t_c$ is the collision time, see e.g. (Luding, 2008):

$$t_c = \pi\Bigg/\sqrt{\frac{k^n}{m_{ij}} - \left(\frac{\gamma^n}{2 m_{ij}}\right)^2}, \qquad (1)$$

with the reduced mass $m_{ij} = m_i m_j/(m_i + m_j)$. For the spring-dashpot case (Cundall & Strack, 1979; Luding, 2008; Weinhart et al., 2012a) the normal, $\mathbf f^n_{ij(sd)}$, and tangential, $\mathbf f^t_{ij(sd)}$, forces are modelled with linear elastic and linear dissipative contributions, hence:

$$\mathbf f^n_{ij(sd)} = k^n\,\delta^n_{ij}\,\hat{\mathbf n}_{ij} - \gamma^n \mathbf v^n_{ij}, \qquad \mathbf f^t_{ij(sd)} = -k^t\,\boldsymbol\delta^t_{ij} - \gamma^t \mathbf v^t_{ij}, \qquad (2)$$

with spring constants $k^n$, $k^t$ and damping coefficients $\gamma^n$, $\gamma^t$. The elastic tangential displacement, $\boldsymbol\delta^t_{ij}$, is defined to be zero at the initial time of contact, and its rate of change is given by:

$$\frac{\mathrm d\boldsymbol\delta^t_{ij}}{\mathrm dt} = \mathbf v^t_{ij} - \frac{1}{r_{ij}}\left(\boldsymbol\delta^t_{ij}\cdot\mathbf v_{ij}\right)\hat{\mathbf n}_{ij}. \qquad (3)$$

In equation (3), the first term is the relative tangential velocity at the contact point, and the second term ensures that $\boldsymbol\delta^t_{ij}$ remains normal to $\hat{\mathbf n}_{ij}$, see (Weinhart et al., 2012a) for details. This model is designed to describe particles that are elastic but dissipative, with a clearly defined coefficient of restitution, $\epsilon$.
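To make the contact computation concrete, the following minimal Python sketch evaluates the normal spring-dashpot force of equation (2) together with a Coulomb-truncated tangential term for a single particle pair. It is an illustration only: the function name and parameter values are invented, the elastic tangential spring of equation (3) is omitted, and it is not the Mercury-DPM implementation.

```python
import numpy as np

def spring_dashpot_force(ri, rj, vi, vj, di, dj, kn=1.0e4, gn=1.0, mu_c=0.5):
    """Normal force of eq. (2) plus a Coulomb-capped tangential estimate.
    Illustrative sketch; the tangential spring history of eq. (3) is omitted."""
    rij = ri - rj
    dist = np.linalg.norm(rij)
    overlap = 0.5 * (di + dj) - dist              # delta_ij^n before truncation
    if overlap <= 0.0:
        return np.zeros(3)                        # particles are not in contact
    n_hat = rij / dist                            # unit normal pointing from j to i
    vij = vi - vj
    vn = np.dot(vij, n_hat) * n_hat               # normal relative velocity
    fn = kn * overlap * n_hat - gn * vn           # elastic + dissipative normal force
    vt = vij - vn                                 # tangential relative velocity (no rotation here)
    ft = -gn * vt                                 # purely dissipative tangential estimate
    ft_max = mu_c * np.linalg.norm(fn)            # Coulomb yield |f^t| <= mu^c |f^n|
    if np.linalg.norm(ft) > ft_max > 0.0:
        ft = -ft_max * vt / np.linalg.norm(vt)
    return fn + ft

# two 1 mm particles, slightly overlapping and approaching each other
f = spring_dashpot_force(np.zeros(3), np.array([0.0009, 0.0, 0.0]),
                         np.array([-0.01, 0.0, 0.0]), np.array([0.01, 0.0, 0.0]),
                         1.0e-3, 1.0e-3)
print(f)
```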

For the Hertzian case we modify the interaction force with:

$$\mathbf f^{n/t}_{ij(Hertz)} = \sqrt{\frac{\delta^n_{ij}}{d}}\;\mathbf f^{n/t}_{ij(sd)}, \qquad (4)$$

see e.g. (Silbert et al., 2001). This model follows from the theory of elasticity and takes account of the full non-linear elastic response.

Finally, for the plastic case (designed to capture small plastic deformation) we modify the normal force using the (hysteretic) elastic-plastic form of Walton and Braun, e.g. (Luding, 2008; Walton & Braun, 1986); therefore, in the normal direction a different spring constant is taken for loading and unloading/reloading of the contact, and no dashpot is used, i.e.:

$$\mathbf f^n_{ij(p)} = \begin{cases} k^n_1\,\delta^n_{ij}\,\hat{\mathbf n}_{ij} & \text{if } k^n_2\!\left(\delta^n_{ij} - \delta^e_{ij}\right) \ge k^n_1\,\delta^n_{ij},\\[4pt] k^n_2\!\left(\delta^n_{ij} - \delta^e_{ij}\right)\hat{\mathbf n}_{ij} & \text{if } k^n_1\,\delta^n_{ij} > k^n_2\!\left(\delta^n_{ij} - \delta^e_{ij}\right) > 0,\\[4pt] \mathbf 0 & \text{if } k^n_2\!\left(\delta^n_{ij} - \delta^e_{ij}\right) \le 0, \end{cases} \qquad \text{(5a)}$$

$$\mathbf f^t_{ij(p)} = -k^t\,\boldsymbol\delta^t_{ij} - \gamma^t \mathbf v^t_{ij}, \qquad \text{(5b)}$$

with $\delta^e_{ij} = \left(1 - k^n_1/k^n_2\right)\delta^{max}_{ij}$, where $\delta^{max}_{ij}$ is the maximum overlap during the contact. Unlike (Luding, 2008; Walton & Braun, 1986), we take $k^n_2$ to be constant, so that the normal coefficient of restitution is given by $\epsilon^n = \sqrt{k^n_1/k^n_2}$. However, for enduring contacts the dissipation is smaller than in the spring-dashpot case, since oscillations on the unloading/reloading ($k^n_2$) branch do not dissipate energy. For a more detailed review of contact laws in general, we refer the reader to (Luding, 2008).
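The branch logic of equation (5a) can be sketched in a few lines of Python; the stiffness values below are illustrative only and the function is a reading of the loading/unloading description above, not any particular code.

```python
def hysteretic_normal_force(overlap, max_overlap, k1=1.0e4, k2=4.0e4):
    """Magnitude of the elastic-plastic normal force in the spirit of eq. (5a):
    loading stiffness k1, un/reloading stiffness k2 about the plastic overlap
    delta_e. Values are illustrative."""
    delta_e = (1.0 - k1 / k2) * max_overlap       # plastic (zero-force) overlap
    f_load = k1 * overlap
    f_unload = k2 * (overlap - delta_e)
    if f_unload >= f_load:
        return f_load                             # initial loading branch
    if f_unload > 0.0:
        return f_unload                           # unloading / reloading branch
    return 0.0                                    # contact has opened plastically

# unloading after a maximum overlap of 0.3 mm; restitution eps_n = sqrt(k1/k2) = 0.5
print(hysteretic_normal_force(2.5e-4, 3.0e-4), (1.0e4 / 4.0e4) ** 0.5)
```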

3. THE MICRO-MACRO TRANSITION

For all multi-scale methods, one of the biggest challenges is how to obtain continuum fields from large amounts of discrete particle data. Here, we give a short overview, then review in more detail the approach we prefer.

There are many papers in the literature on how to go from the discrete to the continuum: binning micro-scale fields into small volumes (Irving & Kirkwood, 1950; Schofield & Henderson, 1982; Luding, 2004; Luding et al., 2001), averaging along planes (Todd et al., 1995), or coarse-graining spatially and temporally (Babic, 1997; Shen & Atluri, 2004; Goldhirsch, 2010). Here, we use the coarse-graining approach described by Weinhart et al. (2012b), as this is still valid within one coarse-graining width of the boundary.

The coarse-graining method has the following advantages over other methods: (i) the fields produced automatically satisfy the equations of continuum mechanics, even near the flow base; (ii) it is neither assumed that the particles are rigid nor that they are spherical; and (iii) the results are even valid for single particles, as no averaging over groups of particles is required. The only assumptions are that each particle pair has a single point of contact (i.e., the particle shapes are convex), that the contact area can be replaced by a contact point (i.e., the particles are not too soft), and that collisions are not instantaneous.



3.1. Notation and basic ideas

Vectorial and tensorial components are denoted by Greek letters in order to distinguish them from the Latin particle indices i, j. Bold vector notation will be used when convenient.

Assume a system given by $N_f$ flowing particles and $N_b$ fixed basal particles, with $N = N_f + N_b$. Since we are interested in the flow, we will calculate macroscopic fields pertaining to the flowing particles only. From statistical mechanics, the microscopic mass density of the flow, $\rho^{mic}$, at a point $\mathbf r$ at time $t$ is defined by:

$$\rho^{mic}(\mathbf r, t) = \sum_{i=1}^{N_f} m_i\,\delta\!\left(\mathbf r - \mathbf r_i(t)\right), \qquad (6)$$

where $\delta(\mathbf r)$ is the Dirac delta function and $m_i$ is the mass of particle i. The following definition of the macroscopic density of the flow is used:

$$\rho(\mathbf r, t) = \sum_{i=1}^{N_f} m_i\,\mathcal W\!\left(\mathbf r - \mathbf r_i(t)\right), \qquad (7)$$

thus replacing the Dirac delta function in (6) by an integrable 'coarse-graining' function $\mathcal W$ whose integral over space is unity. We will take the coarse-graining function to be a Gaussian:

$$\mathcal W\!\left(\mathbf r - \mathbf r_i(t)\right) = \frac{1}{\left(2\pi w^2\right)^{3/2}}\exp\!\left(-\frac{|\mathbf r - \mathbf r_i(t)|^2}{2 w^2}\right), \qquad (8)$$

with width or variance $w$. Other choices of the coarse-graining function are possible, but the Gaussian has the advantage that it produces smooth fields and the required integrals can be analysed exactly. According to Goldhirsch (2010), the coarse-grained fields depend only weakly on the choice of function, and the width $w$ is the key parameter.

It is clear that as $w \to 0$ the macroscopic density defined in (7) reduces to the microscopic one in (6). The coarse-grained density can also be seen as a convolution integral between the micro and macro definitions, i.e.:

$$\rho(\mathbf r, t) = \int_{\mathbb R^3} \rho^{mic}(\mathbf r', t)\,\mathcal W(\mathbf r - \mathbf r')\,\mathrm d\mathbf r'. \qquad (9)$$

3.2. Mass balance

Next, we will consider how to obtain the other fields of interest: the momentum density vector and the stress tensor. As before, all macroscopic variables will be defined in a way compatible with the continuum conservation laws.

The coarse-grained momentum density vector $\mathbf p(\mathbf r, t)$ is defined by:

$$p_\alpha(\mathbf r, t) = \sum_{i=1}^{N_f} m_i\, v_{i\alpha}\,\mathcal W\!\left(\mathbf r - \mathbf r_i(t)\right), \qquad (10)$$

where the $v_{i\alpha}$ are the velocity components of particle i. The macroscopic velocity field $\mathbf V(\mathbf r, t)$ is then defined as the ratio of the momentum and density fields:

$$\mathbf V(\mathbf r, t) = \frac{\mathbf p(\mathbf r, t)}{\rho(\mathbf r, t)}. \qquad (11)$$

It is straightforward to confirm that equations (7) and (10) satisfy exactly the continuity equation:

$$\frac{\partial\rho}{\partial t} + \frac{\partial p_\alpha}{\partial r_\alpha} = 0, \qquad (12)$$

with the Einstein summation convention for Greek letters.
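As an illustration of equations (7), (10) and (11), the following Python sketch evaluates the coarse-grained density, momentum and velocity at a few points from raw particle data using the Gaussian of equation (8). The array names and the particle data are made up for the example.

```python
import numpy as np

def coarse_grained_fields(r_eval, r_p, v_p, m_p, w=0.5e-3):
    """Gaussian coarse-graining of eqs (7), (10), (11) at the points r_eval."""
    # pairwise Gaussian weights W(r - r_i) with width w, eq. (8)
    diff = r_eval[:, None, :] - r_p[None, :, :]              # (n_eval, n_part, 3)
    W = np.exp(-np.sum(diff**2, axis=2) / (2 * w**2)) / (2 * np.pi * w**2)**1.5
    rho = W @ m_p                                            # density, eq. (7)
    p = W @ (m_p[:, None] * v_p)                             # momentum density, eq. (10)
    V = p / rho[:, None]                                     # velocity field, eq. (11)
    return rho, p, V

# three particles of 1 g each, evaluated at two points along z
r_p = np.array([[0.0, 0.0, 0.001], [0.0, 0.0, 0.002], [0.0, 0.0, 0.003]])
v_p = np.array([[0.1, 0.0, 0.0], [0.2, 0.0, 0.0], [0.3, 0.0, 0.0]])
m_p = np.full(3, 1.0e-3)
r_eval = np.array([[0.0, 0.0, 0.0015], [0.0, 0.0, 0.0025]])
rho, p, V = coarse_grained_fields(r_eval, r_p, v_p, m_p)
print(rho, V)
```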

3.3. Momentum balance

Finally, we will consider the momentum conservation equation with the aim of establishing the macroscopic stress field. In general, the desired momentum balance equations are written as:

$$\frac{\partial p_\alpha}{\partial t} + \frac{\partial}{\partial r_\beta}\left(\rho V_\alpha V_\beta\right) = \frac{\partial \sigma_{\alpha\beta}}{\partial r_\beta} + t_\alpha + \rho g_\alpha, \qquad (13)$$

where $\sigma_{\alpha\beta}$ is the stress tensor and $\mathbf g$ is the gravitational acceleration vector. Since we want to describe boundary stresses as well as internal stresses, the boundary interaction force density, or surface traction density, $\mathbf t$, has been included, as described in detail in (Weinhart et al., 2012b).

Expressions (10) and (11) for the momentum $\mathbf p$ and the velocity $\mathbf V$ have already been defined. The next step is to compute their temporal and spatial derivatives, respectively, and reach closure. Then, after some algebra, see (Weinhart et al., 2012b) for details, we arrive at the following expression for the stress:


$$\begin{aligned}\sigma_{\alpha\beta} ={}& -\sum_{i=1}^{N_f}\sum_{j=i+1}^{N_f} f_{ij\alpha}\, r_{ij\beta} \int_0^1 \mathcal W\!\left(\mathbf r - \mathbf r_i + s\,\mathbf r_{ij}\right)\mathrm ds\\ & -\sum_{i=1}^{N_f}\sum_{k=1}^{N_b} f_{ik\alpha}\, b_{ik\beta} \int_0^1 \mathcal W\!\left(\mathbf r - \mathbf r_i + s\,\mathbf b_{ik}\right)\mathrm ds\\ & -\sum_{i=1}^{N_f} m_i\, v'_{i\alpha}\, v'_{i\beta}\,\mathcal W\!\left(\mathbf r - \mathbf r_i\right), \end{aligned} \qquad (14)$$

where $\mathbf v'_i = \mathbf v_i - \mathbf V(\mathbf r, t)$ is the fluctuation velocity of particle i.

Equation (14) differs from the results of Goldhirsch (2010) by an additional term that accounts for the stress created by the presence of the base, as detailed by Weinhart et al. (2012b). The contribution to the stress from the interaction of two flow particles i, j is spatially distributed along the contact line from $\mathbf r_i$ to $\mathbf r_j$, while the contribution from the interaction of a flow particle i with a fixed particle k is distributed along the line from $\mathbf r_i$ to the contact point $\mathbf c_{ik} = \mathbf r_i + \mathbf b_{ik}$. There, the boundary interaction force density

$$t_\alpha(\mathbf r) = \sum_{i=1}^{N_f}\sum_{k=1}^{N_b} f_{ik\alpha}\,\mathcal W\!\left(\mathbf r - \mathbf c_{ik}\right) \qquad (15)$$

is active, measuring the force applied by the base to the flow. It has nonzero values only near the basal surface and can be introduced into continuum models as a boundary condition.

The strength of this method is that the spatially coarse-grained fields by construction satisfy the mass and momentum balance equations exactly at any given time, irrespective of the choice of coarse-graining function. Further details about the accuracy of the stress definition (14) are discussed by Weinhart et al. (2012b). The expression for the energy is not treated in this publication; we refer the interested reader to (Babic, 1997).

4. SHALLOW GRANULAR FLOWS

4.1. Background

Shallow-layer granular continuum models are often used to simulate geophysical mass flows, including snow avalanches (Cui et al., 2007), dense pyroclastic flows, debris flows (Denlinger & Iverson, 2001), block and ash flows (Dalby et al., 2008), and lahars (Williams et al., 2008). Such shallow-layer models involve approximations reducing the properties of a huge number of individual particles to a handful of averaged quantities. Originally these models were derived from the general continuum incompressible mass and momentum equations, using the long-wave approximation (Savage & Hutter, 1989; Gray, 2003; Bokhove & Thornton, 2012) for shallow variations in the flow height and basal topography. Despite the massive reduction in degrees of freedom made, shallow-layer models tend to be surprisingly accurate, and are thus an effective tool in modelling geophysical flows. Moreover, they are now used as a geological risk assessment and hazard planning tool (Dalby et al., 2008). In addition to these geological applications, shallow granular equations have been applied to analyse small-scale laboratory chute flows containing obstacles (Gray, 2003), wedges (Hakonardottir & Hogg, 2005; Gray, 2007) and contractions (Vreman, 2007), showing good quantitative agreement between theory and experiment.

We will consider flow down a slope with inclination $\theta$, with the x-axis downslope, the y-axis across the slope and the z-axis normal to the slope. In general, the free-surface and base locations are given by z = s(x,y) and z = b(x,y), respectively. Here, we will only consider flows over rough flat surfaces, where b can be taken as constant. The height of the flow is h = s − b and the velocity components are $\mathbf u = (u, v, w)^T$. Depth-averaging the mass and momentum balance equations and retaining only leading- and first-order terms (in the ratio of height to length of the flow) yields the depth-averaged shallow granular equations (e.g. Gray, 2003), whose mass balance and x-momentum balance are given by:

$$\frac{\partial h}{\partial t} + \frac{\partial (h\bar u)}{\partial x} + \frac{\partial (h\bar v)}{\partial y} = 0, \qquad \text{(16a)}$$

$$\frac{\partial (h\bar u)}{\partial t} + \frac{\partial}{\partial x}\!\left(\alpha\, h\bar u^2 + \frac{K}{2}\, g h^2 \cos\theta\right) = S_x, \qquad \text{(16b)}$$

where g is the gravitational acceleration, $\bar u$ the depth-averaged velocity, and the source term is given by:

$$S_x = g h \cos\theta\left(\tan\theta - \mu\,\frac{\bar u}{\sqrt{\bar u^2 + \bar v^2}}\right).$$
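For orientation, a minimal Python sketch of the right-hand side of system (16) is given below. The closures K, α and μ are passed in as plain numbers, standing in for the closure relations discussed next; all names and parameter values are illustrative, not a production solver.

```python
import numpy as np

def shallow_granular_rhs(h, hu, theta, mu, K=1.0, alpha=1.0, g=9.81):
    """Mass flux, x-momentum flux and source term of the depth-averaged
    system (16), evaluated for a 1D state (v = 0). Illustrative sketch."""
    u = hu / h
    flux_h = hu                                                        # eq. (16a) flux
    flux_hu = alpha * h * u**2 + 0.5 * K * g * np.cos(theta) * h**2    # eq. (16b) flux
    S_x = g * h * np.cos(theta) * (np.tan(theta) - mu * np.sign(u))    # source with v = 0
    return flux_h, flux_hu, S_x

print(shallow_granular_rhs(h=0.01, hu=0.005,
                           theta=np.radians(24.0), mu=np.tan(np.radians(22.0))))
```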

Before these equations can be solved, closure relations need to be specified for three unknowns: the velocity shape factor, $\alpha$, the ratio of the two diagonal stress components, K, and the friction, $\mu$, between the granular material and the basal surface over which it flows.

These closure relations can either be postulated (from theory or phenomenologically), or determined from experiments or DPM simulations. Our philosophy was to determine these unknown relations using small-scale periodic-box DPM simulations similar to the ones used by Silbert et al. (2001).


Below some of the main findings are summarized. Initially, we consider only the spring-dashpot contact model and look at the closures across different basal surfaces (Weinhart et al., 2012a). The chute is periodic and of size 20d × 10d in the x- and y-directions, with inclination $\theta$, see figure 2. In this case the flow particles are monodispersed. The base was created from particles, and its roughness was changed by modifying the ratio $\lambda$ of the size of the base and flow particles.

4.2. Closure for K

For the shallow-layer theory presented in (16), K is the ratio of two stress components, $K = \sigma_{xx}/\sigma_{zz}$. K was found to be linear in the inclination angle and independent of $\lambda$ (for all but the smooth-base case of $\lambda = 0$):

$$K^{fit} = 1 + \frac{\theta - d_0}{d_1}, \qquad (17)$$

with $d_0$ = 132° and $d_1$ = 21.30°.

Fig. 2. DPM simulation for $N_f/200$ = 17.5, inclination $\theta$ = 24° and diameter ratio of free and fixed particles $\lambda$ = 1, at time t = 2000; gravity direction g as indicated. The domain is periodic in the x- and y-directions. In the z-direction, fixed particles (black) form a rough base while the surface is unconstrained. Colours indicate speed, increasing from blue via green to orange.

4.3. Closure for μ

By far the most studied closure relation for shallow granular flows is the basal friction coefficient $\mu$. In the early models a constant friction coefficient was assumed (Hungr & Morgenstern, 1984; Savage & Hutter, 1989), i.e. $\mu = \tan\delta$, where $\delta$ is a fixed basal friction angle. For these models, steady uniform flow is only possible at a single inclination, $\delta$, below which the flow arrests, and above which the flow accelerates indefinitely. Detailed experimental investigations (GDR MiDi, 2004; Pouliquen, 1999; Pouliquen & Forterre, 2002) of flow over rough uniform beds showed that steady flow emerges over a range of inclinations, $\theta_1 < \theta < \theta_2$, where $\theta_1$ is the minimum angle required for flow and $\theta_2$ is the maximum angle at which steady uniform flow is possible. In (Pouliquen & Forterre, 2002), the measured height $h_{stop}(\theta)$ of stationary material left behind when a flowing layer has been brought to rest was fitted to:

$$\frac{h_{stop}(\theta)}{A\,d} = \frac{\tan\theta_2 - \tan\theta}{\tan\theta - \tan\theta_1}, \qquad \theta_1 < \theta < \theta_2, \qquad (18)$$

where d is the particle diameter and A a characteristic dimensionless scale over which the friction varies. They also observed that the Froude number, $F = \bar u/\sqrt{g h \cos\theta}$, scales linearly with this curve:

$$F = \beta\,\frac{h}{h_{stop}(\theta)} - \gamma, \qquad (19)$$

where $\beta$ and $\gamma$ are two experimentally determined constants. From these relations one can show that the friction closure is given by:

$$\mu(h, F) = \tan\theta_1 + \frac{\tan\theta_2 - \tan\theta_1}{\beta h / \left(A\, d\, (F + \gamma)\right) + 1}. \qquad (20)$$
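The closure (20), together with $h_{stop}$ of equation (18), is straightforward to evaluate. The Python sketch below uses made-up parameter values; the fitted values of A, θ1, θ2, β and γ for each contact law are those tabulated in (Thornton, 2012; Weinhart et al., 2012a), not the numbers used here.

```python
import numpy as np

def h_stop(theta, d, A, theta1, theta2):
    """Stopping height of eq. (18); defined only for theta1 < theta < theta2."""
    if not (theta1 < theta < theta2):
        return np.nan
    return A * d * (np.tan(theta2) - np.tan(theta)) / (np.tan(theta) - np.tan(theta1))

def mu_closure(h, F, d, A, theta1, theta2, beta, gamma):
    """Basal friction closure mu(h, F) of eq. (20)."""
    t1, t2 = np.tan(theta1), np.tan(theta2)
    return t1 + (t2 - t1) / (beta * h / (A * d * (F + gamma)) + 1.0)

# illustrative parameter values only (not the fitted values of the cited papers)
pars = dict(d=1.0e-3, A=3.0, theta1=np.radians(20.0), theta2=np.radians(30.0))
print(h_stop(np.radians(24.0), **pars))
print(mu_closure(h=0.01, F=2.0, beta=0.65, gamma=0.77, **pars))
```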

This experimentally determined law has previously been shown to hold in DPM simulations (e.g. Silbert et al., 2001). In (Thornton, 2012; Weinhart et al., 2012a) we investigate how the parameters A, $\theta_1$, $\theta_2$, $\beta$ and $\gamma$ change as a function of the contact friction between bed and flowing particles, the particle size ratio $\lambda$ and even the type of contact law. The main conclusions were:
– The law (18) holds for the spring-dashpot, plastic and Hertzian contact models.
– The properties of the basal particles have very little effect on the macroscopic friction; that is, they have only a weak effect on the fit parameters.
– The geometric roughness is more important than the contact friction in the interaction law, $\mu^c$.
– The coefficient of restitution of the particles affects only the constants of the flow rule (19), not the friction angles.


Full details of the values of A, $\theta_1$, $\theta_2$, $\beta$ and $\gamma$ as a function of both macro- and microscopic parameters can be found in (Thornton, 2012; Weinhart et al., 2012a).

4.4. Closure for α

Finally, we consider the closure for $\alpha$, the velocity shape factor. This describes the shape of the velocity profile with height and is defined as

$$\alpha = \frac{\int_b^s u^2\,\mathrm dz}{h\,\bar u^2}. \qquad (21)$$

This was done in two steps: firstly, it was observed that the vertical structure of the flow velocity contained three parts, see figure 3 for details. These were then fitted separately and from these fits the shape factor was computed. The values of $\alpha$ as a function of height and angle, $\alpha(h, \theta)$, are shown in figure 4.
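As a consistency check of definition (21), the following short sketch integrates the Bagnold profile quoted in figure 3 numerically and recovers the value α = 1.25 listed in figure 4; the quadrature helper is written out only to keep the example self-contained.

```python
import numpy as np

def trapz(f, z):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

def shape_factor(u, z):
    """Velocity shape factor of eq. (21): alpha = int u^2 dz / (h * ubar^2)."""
    h = z[-1] - z[0]
    ubar = trapz(u, z) / h
    return trapz(u**2, z) / (h * ubar**2)

# Bagnold profile u(z) = (5/3) ubar [1 - ((h - z)/h)^(3/2)] on a layer of unit depth
z = np.linspace(0.0, 1.0, 2001)
u = (5.0 / 3.0) * (1.0 - (1.0 - z) ** 1.5)   # depth average ubar = 1 by construction
print(shape_factor(u, z))                     # ~1.25, the Bagnold value of figure 4
```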

4.5. Future directions

We have now established closure relations for shallow granular flows, and the natural question is the range of validity of closure relations derived from these small, steady, periodic-box simulations.

This closed continuum model has recently been implemented in an in-house discontinuous Galerkin finite element package, hpGEM (Pesch, 2007; van der Vegt et al.). A series of test cases is currently being investigated, including complicated features with contractions and obstacles. The results of this closed model are compared with computationally expensive full-scale DPM simulations of the same scenarios. This comparison and verification step is represented by the dashed lines in figure 1.

It is anticipated that this closed continuum model will work well for simple flow scenarios; however, for complex flows containing particle-free regions and static materials it is likely to fail.

Fig. 3. Flow velocity profiles for varying heights, for 4000 particles and $\lambda$ = 1. We observe a Bagnold velocity profile, $u = \frac{5}{3}\bar u\left[1 - \left(\frac{h - z}{h}\right)^{3/2}\right]$, in the bulk, a linear profile near the surface, and a convex profile near the base ($z < b_1 h_{stop}(\theta)/h$, with $b_1$ = 9.42).

Fig. 4. Shape factor from simulations (markers), for the case $\lambda$ = 1, and the fit $\alpha(h, \theta)$ (dotted lines). For comparison: plug flow $\alpha$ = 1, Bagnold profile $\alpha$ = 1.25 and linear profile $\alpha$ = 1.5.


For this situation, a fully two-way coupled code will have to be developed. More discussion of the problems associated with the development of this code can be found in section 7.

5. COLLISION DETECTION

The performance of a DPM computation relies on several factors, which include both the contact model and the contact detection algorithm. The detection of short-range pairwise interactions between particles in DPM is usually one of the most time-consuming tasks in a computation (Williams & O'Connor, 1999). If one were to undertake collision detection in a naive fashion, one would have to perform N² checks, where N is the number of particles, and this becomes impractical even for relatively small systems.

The most commonly used method for contact detection of nearly mono-sized particles with short-range forces is the Linked-Cell method (Hockney & Eastwood, 1981; Allen & Tildesley, 1989). Due to its simplicity and high performance, it has been utilized since the beginning of particle simulations, and is easily implemented in parallel codes (Form, 1993; Stadler et al., 1997).

Nevertheless, the Linked-Cell method is unable to deal efficiently with particles of greatly varying sizes (Iwai et al., 1999), which will be the case in the next problem considered. This can effectively be addressed by the use of a multilevel contact detection algorithm (Ogarko & Luding, 2012), which we review here. This advanced contact detection algorithm is already implemented in Mercury (Thornton et al.), the open-source code developed here, which is used for all the simulations in this paper. An extensive review of various approaches to contact detection is given in (Munjiza, 2004). The performance differences between them are studied in (Muth, 2007; Ogarko & Luding, 2012; Raschdorf & Kolonko, 2011).

5.1. Algorithm

The present algorithm is designed to determine all the pairs in a set of N spherical particles in a d-dimensional Euclidean space that overlap. Every particle is characterized by the position of its centre $\mathbf r_p$ and its radius $a_p$. For differently sized spheres, $a_{min}$ and $a_{max}$ denote the minimum and the maximum particle radius, respectively, and $a_{min}/a_{max}$ is the extreme size ratio.

The algorithm is made up of two phases. In the first 'mapping phase', all the particles are mapped into a hierarchical grid space (subsection 5.1.1). In the second 'contact detection phase' (subsection 5.1.2), for every particle in the system the potential contact partners are determined, and the geometrical intersection tests with them are made.

5.1.1. Mapping phase

The d-dimensional hierarchical grid is a set of L regular grids with different cell sizes. Every regular grid is associated with a hierarchy level $h \in \{1, 2, \dots, L\}$, where L is the integer number of hierarchy levels. Each level h has a different cell size $s_h \in \mathbb R$, where the cells are d-dimensional cubes. Grids are ordered with increasing cell size so that h = 1 corresponds to the grid with the smallest cell size, i.e., $s_h < s_{h+1}$. For a given number of levels and cell sizes, the hierarchical grid cells are defined by the following spatial mapping, M, of points $\mathbf r \in \mathbb R^d$ to a cell at a specified level h:

$$M : (\mathbf r, h) \mapsto \mathbf c = \left(\left\lfloor \frac{r_1}{s_h} \right\rfloor, \dots, \left\lfloor \frac{r_d}{s_h} \right\rfloor,\; h\right), \qquad (22)$$

where $\lfloor r \rfloor$ denotes the floor function (the largest integer not greater than r). The first d components of the (d + 1)-dimensional vector $\mathbf c$ represent the cell indices (integers), and the last one is the associated level of hierarchy. The latter is limited whereas the former are not.

It must be noted that the cell size of each level can be set independently, in contrast to contact detection methods which use a tree structure for partitioning the domain (Ericson, 1993; Raschdorf & Kolonko, 2011; Thatcher, 2000), where the cell sizes are taken as double the size of the previous lower level of hierarchy, hence $s_{h+1} = 2 s_h$. The flexibility of independent $s_h$ allows one to select the optimal cell sizes, according to the particle size distribution, to improve the performance of the simulations. How to do this is explained by (Ogarko & Luding, 2012).

Using the mapping M, every particle p can be mapped to its cell:

$$\mathbf c_p = M\!\left(\mathbf r_p, h(p)\right), \qquad (23)$$

where h(p) is the level of insertion to which particle p is mapped. The level of insertion h(p) is the lowest level where the cell is big enough to contain the particle p:


$$h(p) = \min\left\{h : 1 \le h \le L,\; 2 a_p \le s_h\right\}. \qquad (24)$$

In this way the diameter of particle p is smaller than or equal to the cell size at its level of insertion, and therefore the classical Linked-Cell method (Allen & Tildesley, 1989) can be used to detect the contacts among particles within the same level of hierarchy.

Figure 5 illustrates a 2-dimensional two-level grid for the special case of a bi-disperse system with $a_{min}$ = 3/2, size ratio 8/3, and cell sizes $s_1$ = 3 and $s_2$ = 8. Since the system contains particles of only two different sizes, two hierarchy levels are sufficient here.

Fig. 5. A 2-dimensional two-level grid for the special case of a bi-disperse system with cell sizes $s_1 = 2a_{min}$ = 3 and $s_2 = 2a_{max}$ = 8 (a.u.). The first-level grid is plotted with dashed lines while the second level is plotted with solid lines. The radius of particle B is $a_B$ = 4 (a.u.) and its position is $\mathbf r_B$ = (10.3, 14.4). Therefore, according to equations (23) and (24), particle B is mapped at the second level to the cell $\mathbf c_B$ = (1, 1, 2). Correspondingly, particle A is mapped to the cell $\mathbf c_A$ = (4, 2, 1). The cells where the cross-level search for particle B has to be performed, from (1,3,1) to (5,6,1), are marked in grey, and the small particles which are located in those cells are dark (green). Note that in the method of Iwai et al. (1999) the search region starts at cell (1, 2, 1), i.e., one more layer of cells (which also includes particle A).
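A minimal Python sketch of the mapping (22) and of the level-of-insertion rule (24) is given below, applied to the two-level example of figure 5; the function names are invented for this illustration and do not correspond to the Mercury implementation.

```python
import math

def map_to_cell(r, h, cell_sizes):
    """Eq. (22): map a point r to integer cell indices at hierarchy level h
    (levels are 1-based, as in the text)."""
    s = cell_sizes[h - 1]
    return tuple(math.floor(x / s) for x in r) + (h,)

def level_of_insertion(radius, cell_sizes):
    """Eq. (24): lowest level whose cell size is at least the particle diameter."""
    for h, s in enumerate(cell_sizes, start=1):
        if 2.0 * radius <= s:
            return h
    raise ValueError("no level large enough for this particle")

# the two-level example of figure 5: s1 = 3, s2 = 8 (a.u.)
cell_sizes = [3.0, 8.0]
rB, aB = (10.3, 14.4), 4.0
hB = level_of_insertion(aB, cell_sizes)
print(hB, map_to_cell(rB, hB, cell_sizes))   # level 2, cell (1, 1, 2) as in figure 5
```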

5.1.2. Contact detection phase

The contact detection is split into two steps, and the search is done by looping over all particles p and performing the first and second steps consecutively for each p. The first step is the contact search at the level of insertion of p, h(p), using the classical Linked-Cell method (Allen & Tildesley, 1989). The search is done in the cell where p is mapped to, i.e., $\mathbf c_p$, and in its neighbouring (surrounding) cells. Only half of the surrounding cells are searched, to avoid testing the same particle pair twice.

The second step is the cross-level search. For a given particle p, one searches for potential contacts only at levels h lower than the level of insertion: $1 \le h < h(p)$. This implies that the particle p will be checked only against the smaller ones, thus avoiding double checks for the same pair of particles. The cross-level search for particle p (located at h(p)) with level h is detailed here:

1. Define the cells $\mathbf c^{start}$ and $\mathbf c^{end}$ at level h as

$$\mathbf c^{start} := M\!\left(\mathbf r_p - \alpha\textstyle\sum_{i=1}^{d}\mathbf e_i,\; h\right) \quad \text{and} \quad \mathbf c^{end} := M\!\left(\mathbf r_p + \alpha\textstyle\sum_{i=1}^{d}\mathbf e_i,\; h\right), \qquad (25)$$

where the corners of a search box (a cube in 3D) are given by $\mathbf r_p \pm \alpha\sum_{i=1}^{d}\mathbf e_i$, with $\alpha = a_p + 0.5 s_h$ and $\mathbf e_i$ the standard basis of $\mathbb R^d$. Any particle q from level h, i.e., h(q) = h, with centre $\mathbf r_q$ outside this box cannot be in contact with p, since the diameter of the largest particle at this level cannot exceed $s_h$.

2. The search for potential contacts is performed in every cell $\mathbf c = (c_1, \dots, c_d, h)$ for which

$$c^{start}_i \le c_i \le c^{end}_i \quad \text{for all } i \in \{1, \dots, d\}, \qquad (26)$$

where $c_i$ denotes the i-th component of the vector $\mathbf c$. In other words, each particle which was mapped to one of these neighbour cells is tested for contact with particle p. In figure 5, the level h = 1 cells where that search has to be performed (for particle B) are marked in grey.

To test two particles for contact, first the axis-aligned bounding boxes (AABBs) of the particles (Moore & Wilhelms, 1988) are tested for overlap. Then, for every particle pair which passed this test, the exact geometrical intersection test is applied (particles p and q collide only if $|\mathbf r_p - \mathbf r_q| < a_p + a_q$, where $|\cdot|$ is the Euclidean norm). Since the overlap test for AABBs is computationally cheaper than for spheres, performing this test first usually increases the performance.
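The cross-level search of equations (25) and (26) and the cheap AABB pre-test can be sketched as follows. The helper names are invented for this example; the search-box half-width α = a_p + 0.5 s_h follows the definition in the text, and the printed cell list is purely illustrative.

```python
import math
from itertools import product

def cross_level_candidate_cells(rp, ap, h, cell_sizes):
    """Candidate cells at a lower level h for particle p: search box of eq. (25)
    and cell range of eq. (26). Illustrative sketch."""
    s = cell_sizes[h - 1]
    alpha = ap + 0.5 * s                                    # half-width of the search box
    c_start = [math.floor((x - alpha) / s) for x in rp]
    c_end = [math.floor((x + alpha) / s) for x in rp]
    ranges = [range(lo, hi + 1) for lo, hi in zip(c_start, c_end)]
    return [idx + (h,) for idx in product(*ranges)]

def aabb_overlap(rp, ap, rq, aq):
    """Cheap axis-aligned bounding-box pre-test; the exact test is |rp - rq| < ap + aq."""
    return all(abs(x - y) < ap + aq for x, y in zip(rp, rq))

# level-1 candidate cells (s1 = 3) around particle B of figure 5, and one AABB pre-test
print(cross_level_candidate_cells((10.3, 14.4), 4.0, 1, [3.0, 8.0]))
print(aabb_overlap((10.3, 14.4), 4.0, (12.0, 11.0), 1.5))
```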

5.2. Performance test

In this section we present numerical results on the performance of the algorithm when applied to bi-disperse particle systems, i.e., two different sizes, as will be considered for the segregation case in the next section. For such systems, the cell sizes of the two-level grid can easily be selected as the two diameters of the particle species. However, for some situations this may not be as efficient as the use of the single-level Linked-Cell method, as we show below.


How the algorithm performs for poly-disperse systems and how to select the optimal cell sizes and number of levels for such systems is shown in (Ogarko & Luding, 2012).

We use homogeneous and isotropic disordered systems of colliding elastic spherical particles in a unit cubical box with hard walls. The motion of the particles is governed by Newton's second law, with a linear elastic contact force during overlap. For simplicity, every particle undergoes only translational motion (without rotation) and gravity is set to zero. For more details on the numerical procedure and the preparation of the initial configurations see (Ogarko & Luding, 2012).

We consider a bi-disperse size distribution with the same volume of small and large particles. This distribution can be characterized by a single parameter, the ratio between the small and large particle radius, which in this convention lies between 0 and 1. The considered systems have a volume fraction close to the jamming density. Namely, the volume fraction of the systems with size ratios 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3 and 0.2 is 0.631, 0.635, 0.642, 0.652, 0.665, 0.682, 0.703 and 0.723, respectively. For the influence of the volume fraction on the performance of the algorithm, see (Ogarko & Luding, 2011).

Fig. 6. The speed-up S of the two-level grid relative to the single-level grid (Linked-Cell method) for bi-disperse systems with different size ratios. The number of particles used is N = 128 000 for size ratios above 0.4, N = 256 000 for 0.3 and N = 768 000 for 0.2. Three independent runs were performed for every size ratio and the average CPU time values are used for the calculation of S.

Figure 6 shows the speed-up S of the two-level grid relative to the single-level grid (Linked-Cell method). For similar particle sizes, i.e., size ratios above 0.7, the use of the two-level grid slightly (within 40%) slows down the algorithm. This is due to the overhead associated with the cross-level tests. With increasing difference in particle size, i.e., decreasing size ratio, the speed-up increases. For size ratios below 0.7 the speed-up exceeds unity and the use of the two-level grid becomes advantageous. The maximum speed-up of 22 is achieved for the lowest considered size ratio of 0.2. A much higher speed-up is expected for even smaller size ratios.

6. MICRO-MACRO FOR SEGREGATING FLOWS

6.1. Background

Except for the very special case of all particles being identical in density and size, segregation effects can be observed in granular materials. In both natural and industrial situations, segregation plays an important, but poorly understood, role in the flow dynamics (Iverson, 2003; Shinbrot et al., 1999). There are many mechanisms for the segregation of dissimilar grains in granular flows; however, segregation due to size differences is often the most important (Drahun & Bridgewater, 1983). We will focus on dense granular chute flows where kinetic sieving (Middleton, 1970; Savage & Lun, 1988) is the dominant mechanism for particle-size segregation. The basic idea is: as the grains avalanche downslope, the local void ratio fluctuates and small particles preferentially fall into the gaps that open up beneath them, as they are more likely to fit into the available space than the large ones. The small particles, therefore, migrate towards the bottom of the flow and lever the large particles upwards due to force imbalances. This was termed squeeze expulsion by Savage and Lun (1988).

The first model of kinetic sieving was developed by Savage and Lun (1988), using a statistical argument about the distribution of void spaces. Later, Gray and Thornton (2005) developed a similar model from a mixture-theory framework. Their derivation has two key assumptions: firstly, as the different particles percolate past each other there is a Darcy-style drag between the different constituents (i.e., the small and large particles) and, secondly, particles falling into void spaces do not support any of the bed weight. Since the number of voids available for small particles to fall into is greater than for large particles, it follows that a higher percentage of the small particles will be falling and, hence, not supporting any of the bed load.


In recent years, this segregation theory has been developed and extended in many directions, including the addition of a passive background fluid (Thornton et al., 2006), the effect of diffusive remixing (Gray & Chugunov, 2006), and the generalization to multi-component granular flows (Gray & Ancey, 2011). We will use the two-particle-size segregation-remixing version derived by Gray and Chugunov (2006); however, it should be noted that Dolgunin and Ukolov (1995) were the first to suggest this form, using phenomenological arguments. The bi-dispersed segregation-remixing model contains two dimensionless parameters. These will in general depend on flow and particle properties, such as: size ratio, material properties, shear rate, slope angle, particle roughness, etc. One of the weaknesses of the model is that it is not able to predict the dependence of these two parameters on the particle and flow properties. Here the main results of (Thornton, 2013) are summarized, where the ratio of these parameters was determined from DPM simulations.

The two-particle segregation-remixing equation (Gray & Chugunov, 2006) takes the form of a non-dimensional scalar conservation law for the small-particle concentration $\phi$ as a function of the spatial coordinates $\hat x$, $\hat y$ and $\hat z$, and time $\hat t$:

$$\frac{\partial\phi}{\partial\hat t} + \frac{\partial}{\partial\hat x}\!\left(\phi\hat u\right) + \frac{\partial}{\partial\hat y}\!\left(\phi\hat v\right) + \frac{\partial}{\partial\hat z}\!\left(\phi\hat w\right) - \frac{\partial}{\partial\hat z}\!\left(S_r\,\phi(1-\phi)\right) = \frac{\partial}{\partial\hat z}\!\left(D_r\,\frac{\partial\phi}{\partial\hat z}\right), \qquad (27)$$

where $S_r$ is a dimensionless measure of the segregation rate, whose form in the most general case is discussed in Thornton et al. (2006), and $D_r$ is a dimensionless measure of the diffusive remixing. In (27), $\partial$ is used to indicate a partial derivative, and the 'hat' a dimensionless variable; $\hat x$ is the downslope coordinate, $\hat y$ the cross-slope coordinate and $\hat z$ the coordinate normal to the base. Furthermore $\hat u$, $\hat v$ and $\hat w$ are the dimensionless bulk velocity components in the $\hat x$, $\hat y$ and $\hat z$ directions, respectively.

The conservation law (27) is derived under the assumption of uniform porosity and is often solved subject to the condition that there is no normal flux of particles through the base or free surface of the flow.

We limit our attention to small-scale DPM simulations, periodic in the x- and y-directions, and investigate the final steady states. Therefore, we are interested in a steady-state solution to (27), subject to the no-normal-flux boundary condition at $\hat z$ = 0 (the bottom) and $\hat z$ = 1 (the top), that is independent of $\hat x$ and $\hat y$. Gray and Chugunov (2006) showed that such a solution takes the form:

$$\phi(\hat z) = \frac{1 - \exp\!\left(-P_s\phi_0\right)}{1 - \exp\!\left(-P_s\phi_0\right) + \left[\exp\!\left(-P_s\phi_0\right) - \exp\!\left(-P_s\right)\right]\exp\!\left(P_s\hat z\right)}, \qquad (28)$$

where $P_s = S_r/D_r$ is the segregation Péclet number and $\phi_0$ is the mean concentration of small particles. This solution represents a balance between the last two terms of (27) and is related to the logistic equation. In general, $P_s$ will be a function of the particle properties, and we will use DPM to investigate the dependence of $P_s$ on the particle size ratio $\sigma = d_s/d_l$.

It should be noted that $\sigma$ has been defined such that it is consistent with the original theory of Savage and Lun (1988); however, with this definition only values between 0 and 1 are possible. Therefore, we will present the results in terms of $\sigma^{-1}$, which ranges from 1 to infinity.

6.2. The Micro-Macro transition

Figure 7 shows a series of images from the DPM simulations at different times and values of $\sigma^{-1}$. The simulations take place in a box which is periodic in x and y, is $5d_s$ wide and $83.3d_s$ long, and is inclined at an angle of 25° to the horizontal. The base was created by adding fixed small particles randomly to a flat surface. The simulations are performed with 5000 flowing small particles, and the number of large particles is chosen such that the total volume of large and small particles is equal, i.e., $\phi_0$ = 0.5 (to within the volume of one large particle).

Figure 8 shows a fit of equation (28) to the small-particle volume fraction for several cases. The fit is performed using non-linear regression as implemented in MATLAB. The fit is reasonable in all cases, especially considering there is only one degree of freedom, $P_s$. From these plots it is possible to obtain $P_s$ as a function of $\sigma^{-1}$, and this was found to be given by:

$$P_s = P_{max}\left[1 - \exp\!\left(-k\left(\sigma^{-1} - 1\right)\right)\right], \qquad (29)$$

where k = 5.21 is the saturation constant and $P_{max}$ = 7.35.
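A short Python sketch of the steady profile (28), written here in an algebraically equivalent form, and of the fitted relation (29) is given below. It also confirms numerically that the depth average of (28) returns φ0, as mass conservation requires; the helper names are invented for the example.

```python
import numpy as np

def phi_steady(z_hat, Ps, phi0=0.5):
    """Steady small-particle concentration profile of eq. (28)."""
    a = 1.0 - np.exp(-Ps * phi0)
    b = (np.exp(-Ps * phi0) - np.exp(-Ps)) * np.exp(Ps * z_hat)
    return a / (a + b)

def Ps_of_sigma_inv(sigma_inv, k=5.21, Pmax=7.35):
    """Fitted segregation Peclet number of eq. (29)."""
    return Pmax * (1.0 - np.exp(-k * (sigma_inv - 1.0)))

Ps = Ps_of_sigma_inv(1.5)
z = np.linspace(0.0, 1.0, 5)
print(Ps, phi_steady(z, Ps))
# the depth average of the profile recovers phi0 = 0.5, as required
print(np.mean(phi_steady(np.linspace(0.0, 1.0, 100001), Ps)))
```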



6.3. Future directions

For the segregation case the micro-macro transition has been shown to be useful in establishing the relations between the parameters that appear in the continuum descriptions and the material parameters of the particles. Additionally, in this case, we highlighted a discrepancy between the particle simulations and the theory, see figure 8, i.e. the inflection point near the base in the simulated concentration profiles. Further analysis of the simulation data has shown that this discrepancy arises because the particle diffusion is not constant with depth, as assumed by the model. Therefore, the model has to be improved to capture the full dynamics in these situations.

From a modelling point of view, one of the open topics at the moment is the determination of segregation shallow-water models, see e.g. (Woodhouse, 2012), but this is beyond the scope of this review.

7. CONCLUSIONS AND FUTURE PERSPECTIVE

Here, we have shown that continuum parameters such as the macroscopic friction can be accurately extracted from particle simulations.

Fig. 7. A series of snapshots from the DPM simulations with large (orange) and small (blue) particles. The rows correspond to distinct particle-size ratios and the columns to different times. Along the top row $\sigma^{-1}$ = 1.1, middle row $\sigma^{-1}$ = 1.5 and bottom row $\sigma^{-1}$ = 2; whereas the left column is for t = 1, middle t = 5 and right t = 60.

Fig. 8. Plots of the small-particle volume fraction $\phi$ as a function of the scaled depth $\hat z$. The black lines are the coarse-grained DPM simulation data and the blue lines are the fit to equation (28) produced with MATLAB's non-linear regression function. For the fit only $P_s$ is used as a free parameter. Dotted lines show the 95% confidence intervals for the fit.


extracted from particle simulations. We have shown that the micro-macro transition can be achieved using small particle simulations, i.e., we can determine the closure relations for a continuum model as a function of the microscopic parameters. Here, this one-way coupling from micro- to macro-variables was achieved for steady uniform flows, but can in principle be used to predict non-uniform, time-dependent flows, provided that the variations in time and space are small. Comparisons with large-scale experiments and large DPM simulations are needed to determine the range of parameters for which the steady uniform closure laws hold, as indicated in figure 1a.

However, for strongly varying flows, such as arresting flows, avalanching flows, flow near boundaries or near abrupt phase changes (dead zones, shocks), no closure relations in functional form are known. Ideally, the full 3D granular flow rheology could be determined in the full parameter space and then introduced into a pure continuum solver. However, since the parameter space is just too wide and situations can and will occur that are not covered by a systematic parameter study, other strategies and approaches can be thought of. For such interesting situations, where the rheology enters unknown regimes, or where changes are too strong and/or fast, a two-way coupling to a particle solver is a valid approach. If these complex regions are small, one can use a two-way boundary coupling, where a particle solver is used in the complex region and a continuum solver in the remaining domain, with an overlapping region where both solvers are used and where the two methods are coupled by using suitable boundary conditions (Markesteijn, 2011).

Alternatively, if the complex regions are too large to be solved by particle simulations, one can use a continuum solver where a small particle simulation is run each time the closure relations need to be evaluated (Weinan et al., 2007). This particle simulation is two-way coupled to the continuum solution in the sense that it has to satisfy the parameters set by the continuum solution (such as height, depth-averaged velocity and depth-averaged velocity gradient and boundary conditions) and return the closure relations (such as friction and velocity shape factor). Both alternative strategies provide plenty of unsolved challenges in communicating between discrete and continuous "worlds" concerning nomenclature, parameters, boundary conditions and their respective control.
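A minimal Python sketch of this call-and-return structure is given below, assuming a crude depth-averaged momentum update and a mocked micro solver; the function names, the placeholder relations and the caching strategy are illustrative only and do not correspond to the hpGEM or Mercury interfaces.

```python
from dataclasses import dataclass

@dataclass
class Closure:
    friction: float       # effective basal friction returned by the micro solver
    shape_factor: float   # velocity profile shape factor returned by the micro solver

CLOSURE_CACHE = {}

def run_micro_simulation(height, mean_velocity, velocity_gradient):
    """Stand-in for a small DPM run constrained by the macro state; a real version
    would launch a particle simulation and coarse-grain its steady state."""
    friction = 0.3 + 0.05 * mean_velocity / (height + 1.0)   # placeholder relation
    shape_factor = 1.0 + 0.1 * abs(velocity_gradient)        # placeholder relation
    return Closure(friction, shape_factor)

def advance_continuum(state, dt, g=9.81, slope=0.42):
    """One macro step; the closure is requested from the micro solver only
    when a sufficiently similar macro state has not been evaluated before."""
    key = (round(state["h"], 2), round(state["u"], 2))
    if key not in CLOSURE_CACHE:
        CLOSURE_CACHE[key] = run_micro_simulation(state["h"], state["u"], state["dudz"])
    c = CLOSURE_CACHE[key]
    state["u"] += dt * g * (slope - c.friction)   # driving minus Coulomb-type friction
    return state

state = {"h": 10.0, "u": 1.0, "dudz": 0.1}
for _ in range(5):
    state = advance_continuum(state, dt=0.01)
print(state, len(CLOSURE_CACHE))
```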

The next versions of both the in-house continuum solver hpGEM (van der Vegt et al.) and the DPM code Mercury (Thornton et al.) are designed such that they can be easily coupled and hence used to form the basis of a granular two-way coupled code.

Acknowledgements. The authors would like to thank the late Institute of Mechanics, Processes and Control, Twente (IMPACT) for the primary financial support of this work as part of the research program "Superdispersed multiphase flows". The DPM simulations performed for this paper were undertaken in Mercury-DPM, which was initially developed within this IMPACT program. It is primarily developed by T. Weinhart, A. R. Thornton and D. Krijgsman as a joint project between the Multi-Scale Mechanics (Mechanical Engineering) and the Mathematics of Computational Science (Applied Mathematics) groups at the University of Twente. We also thank the NWO VICI grant 10828, the DFG project SPP1482 B12 and the Technology Programme of the Ministry of Economic Affairs (STW MuST project 10120) for financial support.

REFERENCES

Allen, M.P., Tildesley, D.J., 1989, Computer simulations of liq-uids, Clarendon Press, Oxford.

Babic, M., 1997, Average balance equations for granular materials, Int. J. Eng. Science, 35 (5), 523-548.

Bokhove, O., Thornton, A.R., 2012, Shallow granular flows, in: Fernando, H.J., ed., Handbook of environmental fluid dy-namics, CRC Press.

Cui, X., Gray, J.M.N.T., Johannesson, T., 2007, Deflecting dams and the formation of oblique shocks in snow avalanches at flateyri, JGR, 112.

Cundall, P.A., Strack, O.D.L., 1979, A discrete numerical model for granular assemblies, Geotechnique, 29, 47-65.

Dalbey, K., Patra, A.K., Pitman, E.B., Bursik, M.I., Sheridan, M.F., 2008, Input uncertainty propagation methods and hazard mapping of geophysical mass flows, J. Geophys. Res., 113.

Denlinger, R.P. Iverson, R.M., 2001, Flow of variably fluidized granular masses across three-dimensional terrain, 2. nu-merical predictions and experimental tests, J. Geophys Res., 106 (B1), 533-566.

Dolgunin, V.N., Ukolov, A.A., 1995, Segregation modelling of particle rapid gravity flow, Powder Tech., 83 (2), 95-103.

Drahun, J.A., Bridgewater, J., 1983, The mechanisms of free surface segregation, Powder Tech., 36, 39-53.

Ericson, C., 2004, Real-time collision detection (The Morgan Kaufmann Series in Interactive 3-D Technology), Morgan Kaufmann Publishers Inc., San Francisco.

Form, W., Ito, N., Kohring, G.A., 1993, Vectorized and parallel-ized algorithms for multi-million particle MD-simulation, International Journal of Modern Physics C, 4(6), 1085-1101.


GDR Midi, 2004, On dense granular flows, Eur. Phys. J.E., 14, 341-365.

de Gennes, P.G., 2008, From rice to snow, in: Nishina Memorial Lectures, vol. 746 of Lecture Notes in Physics, Springer, Berlin/Heidelberg, 297-318.

Silbert, L.E., Ertas, D., Grest, G.S., Halsey, T.C., Levine, D., Plimpton, S.J., 2001, Granular flow down an inclined plane: Bagnold scaling and rheology, Phys. Rev. E, 64, 051302.

Goldhirsch, I., 2010, Stress, stress asymmetry and couple stress: from discrete particles to continuous fields, Granular Matter, 12 (3), 239-252.

Gray, J.M.N.T., Ancey, C., 2011, Multi-component particle-size segregation in shallow granular avalanches, J. Fluid Mech., 678, 535-588.

Gray, J.M.N.T., Chugunov, V.A., 2006 Particle-size segregation and diffusive remixing in shallow granular avalanches, J. Fluid Mech., 569, 365-398.

Gray, J.M.N.T., Cui, X., 2007, Weak, strong and detached oblique shocks in gravity driven granular free-surface flows, J. Fluid Mech., 579, 113-136.

Gray, J.M.N.T., Tai, Y.C., Noelle, S., 2003, Shock waves, dead zones and particle-free regions in rapid granular free sur-face flows, J. Fluid Mech., 491, 161-181.

Gray, J.M.N.T., Thornton, A.R., 2005, A theory for particle size segregation in shallow granular free-surface flows, Proc. Royal Soc. A, 461,1447-1473.

Hakonardottir, K.M., Hogg, A.J., 2005, Oblique shocks in rapid granular flows, Phys. Fluids.

Hockney, R.W., Eastwood, J.W., 1981, Computer simulation using particles, McGraw- Hill, New York.

Hungr, O., Morgenstern, N.R., 1984, Experiments on the flow behaviour of granular materials at high velocity in an open channel, Geotechnique, 34 (3).

Irving, J.H., Kirkwood, J.G., 1950, The statistical mechanical theory of transport processes. IV. The equations of hydrodynamics, The Journal of Chemical Physics.

Iverson, R.M., 2003, The debris-flow rheology myth, in: Rickenmann and Chen, eds, Debris-flow hazards mitigation: Mechanics, prediction and assessment, Millpress, 303-314.

Iwai, T., Hong, C.W., Greil, P., 1999, Fast particle pair detection algorithms for particle Simulations, Int. J. Modern Phys-ics C, 10(5), 823-837.

Jenkins, J.T., Savage, S.B., 1983, A theory for the rapid flow of identical, smooth, nearly elastic, spherical particles, Jour-nal Fluid. Mech., 130, 187-202.

Luding, S., 2004, Micro-macro models for anisotropic granular media, in: Vermeer, P.A., Ehlers, W., Herrmann, H.J. Ramm, E., eds, Micro-macro models for anisotropic granular media, Balkema A.A., Leiden, 195-206.

Luding, S., 2008, Cohesive, frictional powders: contact models for tension, Granular Matter, 10 (4), 235-246.

Luding, S., Laetzel, M., Volk, W., Diebels, S., Hermann, H.J., 2001, From discrete element simulations to a continuum model, Comp. Meth. Appl. Mech. Engng., 191, 21-28.

Lun, C.K.K., Savage, S.B., Jeffrey, D.J., Chepurniy, N., 1984, Kinetic theories for granular flow: inelastic particles in couette flow and slightly inelastic particles in a general flow field, J. Fluid Mech, 140, 223.

Markesteijn, A.P., 2011, Connecting molecular dynamics and computational fluid dynamics, Ph.D. Thesis, University of Delft.

Middleton, G.V., 1970, Experimental studies related to problems of flysch sedimentation, in: Lajoie, J., ed., Flysch Sedi-mentology in North America, Business and Economics Science Ltd, Toronto, 253-272.

Moore, M., Wilhelms, J., 1988, Collision detection and response for computer animation, Computer Graphics (SIGGRAPH '88 Proceedings), 22 (4), 289-298.

Munjiza, A., 2004, The combined finite-discrete element method, John Wiley & Sons Ltd.

Muth, B., Mueller, M.K., Eberhard, P., Luding, S., 2007, Collision detection and administration for many colliding bodies, Proc. DEM07, 1-18.

Ogarko, V., Luding, S., 2010, Data structures and algorithms for contact detection in numerical simulation of discrete par-ticle systems, Proc. 6th Word Congress on Particle Tech-nology WCPT6, Nuremberg.

Ogarko, V., Luding, S., 2011, A study on the influence of the particle packing fraction on the performance of a multi-level contact detection algorithm, in: Onate, E., Owen, D.R.J., eds, II Int. Conf. on Particle-based Methods - Fundamentals and Applications, PARTICLES 2011, Bar-celona, Spain, 1-7.

Ogarko, V., Luding, S., 2012, A fast multilevel algorithm for contact detection of arbitrarily polydisperse objects, Com-puter Physics Communications, 183 (4), 931-936.

Pesch, L., Bell, A., Sollie, W.E.H., Ambati, V.R., Bokhove, O., van der Vegt, J.J.W., 2007, hpGEM: a software framework for discontinuous Galerkin finite element methods, ACM Transactions on Mathematical Software, 33 (4).

Pouliquen, O., 1999, Scaling laws in granular flows down rough inclined planes, Phys. Fluids, 11 (3), 542-548.

Pouliquen, O., Forterre, Y., 2002, Friction law for dense granular flows: application to the motion of a mass down a rough inclined plane, J. Fluid Mech., 453, 131-151.

Raschdorf, S., Kolonko, M., 2011, A comparison of data struc-tures for the simulation of polydisperse particle packings, Int. J. Num. Meth. Eng., 85, 625-639.

Savage, S.B., Hutter, K., 1989, The motion of a finite mass of material down a rough incline, Journal Fluid. Mech., 199, 177-215.

Savage, S.B., Lun, C.K.K., 1988, Particle size segregation in inclined chute flow of dry cohesionless granular material, J. Fluid Mech., 189, 311-335.

Schofield, P., Henderson, J.R., 1982, Statistical mechanics of inhomogeneous fluids, Proc. R. Soc., 379, 231-246.

Shen, S., Atluri, S.N., 2004, Atomic-level stress calculation and continuum molecular system equivalence, CMES, 6 (1), 91-104.

Shinbrot, T., Alexander, A., Muzzio, F.J., 1999, Spontaneous chaotic granular mixing, Nature, 397 (6721), 675-678.

Stadler, J., Mikulla, R., Trebin, H.R., 1997, IMD: A software package for molecular dynamics studies on parallel com-puters, Int. J. Modern Physics C, 8 (5), 1131-1140.

Thatcher, U., 2000, Loose octrees, in: deLoura, M., ed., Charles River Media.

Thornton, A.R., Weinhart, T., Krijgsman, D., Luding, S., Bokhove, O., Mercury-DPM. http://www2.msm.ctw.utwente.nl/athornton/MD/.

Thornton, A.R, Gray, J.M.N.T., Hogg, A.J., 2006, A three phase model of segregation in shallow granular free-surface flows, J. Fluid Mech., 550, 1-25.

Thornton, A.R, Weinhart, T., Luding, S., Bokhove, O., 2012, Modelling of particle size segregation: Calibration using


the discrete particle method. Submitted to Int. J. Mod. Phys. C., 23(8), 1240014.

Thornton, A.R, Weinhart, T., Luding, S., Bokhove, O., 2013, Friction dependence of shallow granular flows from dis-crete particle simulations. Submitted to Eurp. Phys. Letts.

Todd, B.D., Evans, D.J., Daivis, P.J., 1995, Pressure tensor for inhomogeneous fluids, Phys. Rev. E, 52(2).

van der Vegt, J.J.W., Thornton, A.R., Bokhove, O., hpgem. http://www2.msm.ctw.utwente.nl/athornton/hpGEM/.

Vreman, A.W., Al-Tarazi, M., Kuipers, A.M., van Sint Annaland, M., Bokhove, O., 2007, Supercritical shallow granular flow through a contraction: experiment, theory and simu-lation, JFM, 578, 233-269.

Walton, O.R., Braun, R.L., 1986, Viscosity, granular temperature, and stress calculations for shearing assemblies of inelastic, frictional disks, Journal of Rheology, 30 (949).

Weinan, E., Engquist, B., Li, X., Ren, W., Vanden-Eijnden, E., 2007, Heterogeneous multiscale methods: a review, Communications in Computational Physics, 2 (3), 367-450.

Weinhart, T., Thornton, A.R., Luding, S., Bokhove, O., 2012a, Closure relations for shallow granular flows from particle simulations, Granular Matter, 14, 531-552.

Weinhart, T., Thornton, A.R., Luding, S., Bokhove, O., 2012b, From discrete particles to continuum fields near a bounda-ry, Granular Matter, 14, 289-294.

Williams, J.R., O’Connor, R., 1999, Discrete element simulation and the contact problem, Archives of Computational Methods in Engineering, 6 (4), 279-304.

Williams, J.R., Stinton, A.J., Sheridan, M.F., 2008, Evaluation of the Titan 2D two-phase flow model using an actual event: Case study of the Vazcún valley lahar, Journal of Volcanology and Geothermal Research, 177, 760-766.

Woodhouse, M., Thornton, A.R., Johnson, C., Kokelaar, P., Gray, J.M.N.T., 2012, Segregation induced fingering instabilities in granular avalanches, JFM, 709, 543-580.

WIELOSKALOWE METODY DLA

WIELOSKŁADNIKOWYCH MATERIAŁÓW ZIARNISTYCH

Streszczenie

The article presents the progress in the understanding of the flow of poured granular materials that has recently been achieved thanks to multi-scale modeling techniques. First, the discrete particle method (DPM) is discussed and it is explained how continuum fields should be constructed from discrete data so that the model is consistent with the macroscopic conservation of mass and momentum. A new contact detection method is also presented, which can be used for multi-component granular materials with particle sizes varying over orders of magnitude. It is shown how advanced DPM simulations can be used to obtain closure relations for the continuum model (the mapping of variables and functions between the micro and macro scales). This has enabled the development of continuum models that contain information about the microstructure of granular materials without the need for additional assumptions.

The micro-macro transition is presented for two granular flow problems. The first is flow down a shallow chute, where the main unknown in the continuum model is the macroscopic friction coefficient. The work investigates how this coefficient depends on the properties of the flowing particles and of the surface over which they flow. The second analyzed problem is segregation in chute flow of polydisperse particles. In both problems, short steady-state intervals of the DPM simulations were considered in order to obtain realistic closure relations describing the continuum properties of the material.

The paper also discusses the accuracy and validity of the described closure relations for complex dynamic problems, which are far from the short steady-state solutions for which these relations were obtained. For simple cases, the use of the defined closure relations gave correct results. In more complicated situations, new, more advanced solutions are needed, in which the macroscopic continuum and the micro-scale discrete model are coupled dynamically with feedback.

Received: September 20, 2012 Received in a revised form: November 4, 2012

Accepted: November 21, 2012


NUMERICAL ASPECTS OF COMPUTATIONAL HOMOGENIZATION

MARTA SERAFIN*, WITOLD CECOT

Cracow University of Technology, ul. Warszawska 24, 31-155 Kraków *Corresponding author: [email protected]

Abstract

Computational homogenization enables replacement of a heterogeneous domain by an equivalent body with effective material parameters. The approach that we use is based on a two-scale micro/macro analysis. In the micro-scale, heterogeneous properties are collected in so-called representative volume elements (RVE), which are small enough to satisfy the scale separation condition, but also large enough to contain all information about the material heterogeneity. In the macro-scale the material is assumed to be homogeneous, with the effective material parameters obtained during the RVE analysis. The coupling between both scales is provided at selected macro-level points, which are associated with independent RVEs. Then, approximation of the solution in the whole domain is performed. Even though such a homogenization significantly reduces the time of computation, the efficiency and accuracy of the analysis are still not trivial issues. In the micro-level it is required to guarantee an accurate representation of the heterogeneity, and at both scales the optimal number of degrees of freedom should be used.

The paper presents the application of one of the most efficient numerical techniques, i.e. automatic hp-adaptive FEM, that enables a user to obtain error-controlled results in a rather short time; the assessment of the homogenization error, which is crucial for determination of the parts of the body where homogenization cannot be used; and the hp-mixed FEM discretization details.

Key words: homogenization, representative volume element, adaptive finite element method, mixed FEM

1. INTRODUCTION

Even though numerical homogenization speeds up the solution of real-life problems for heterogeneous materials, the time of computation may be very large, especially if nonlinearity is accounted for. Therefore, we discuss in this paper certain numerical aspects that should increase the efficiency of computational homogenization (Feyel, 2003; Gitman, 2006; Kouznetsova et al., 2004) without losing its accuracy.

Numerical analysis is performed by the automatic hp-adaptive version of FEM (Demkowicz et al., 2002). It is well known that the method gives the fastest convergence for linear problems. We have confirmed that one may expect the same situation for inelastic problems (Serafin and Cecot, 2012), and it may be used both at the micro and macro-levels.

Since stresses are of primary interest we decided to use the mixed FEM, in which stresses are approximated directly, rather than by derivatives of displacements. The stability of this approach is a difficult task (Arnold et al., 2007), but efficient and stable hp-mixed FEM is possible if an appropriate weak formulation and shape functions are used.

In order to increase the reliability of the results the error of homogenization is estimated (Cecot et al., 2012). We propose assessment of the homogenization error by additional analyses in selected subdomains with boundary conditions determined by the homogenized solution. The residuum of the differential equation


for the heterogeneous body may also be used as another criterion for the detection of subdomains with a large discrepancy between the exact and homogenized solutions.

2. ADAPTIVE FINITE ELEMENT METHOD

In this paper the automatic hp-adaptive finite element method, proposed by Demkowicz et al. (2002), is used for the numerical analysis. Its main advantage is exponential convergence, superior to h and p adaptation techniques, where only algebraic convergence may be obtained.

This automatic mesh adaptation was successfully used for various linear problems. Its key point is an appropriate strategy of anisotropic h, p or hp mesh refinement. The strategy proposed by Demkowicz et al. (2002) is based on the interpolation error estimate, which is a good upper bound of the best approximation error that, in turn, for coercive problems by Cea's lemma, is the upper bound of the actual approximation error. The aforementioned interpolation error is estimated making use of a fine mesh solution u_{h/2,p+1} (the current mesh refined in h and enriched in p), which serves as a substitute for the exact solution. Such an ''exact'' solution is interpolated locally on the possible new hp-refined meshes. The difference between u_{h/2,p+1} and its interpolant approximates the interpolation error, and the optimal anisotropic mesh refinement is determined in such a way that the reduction of the interpolation error per number of additional degrees of freedom is maximal. It means that for the coarse mesh the optimal (h, p or hp) refinement is determined by maximizing the expression

$\omega(M_{new}) = \dfrac{\left\|u_{h/2,p+1} - \Pi_{hp}\, u_{h/2,p+1}\right\|^2 - \left\|u_{h/2,p+1} - \Pi_{M_{new}}\, u_{h/2,p+1}\right\|^2}{N_{M_{new}} - N_{hp}}, \qquad \omega(M_{opt}) = \max_{M_{new}} \omega(M_{new})$   (1)

with the additional assumption that the mesh is one-irregular, where M_new and M_opt denote an arbitrary and the optimal new mesh, respectively; Π_hp and Π_{M_new} denote the H¹ projection-based interpolants on the current and new meshes, respectively; N_{M_new} and N_hp are the numbers of degrees of freedom of the new and current meshes. The maximization is performed by a search over a suitable subset of all possible hp refinements for every coarse mesh element.

Thus, the adaptation algorithm starts with the solution of the problem on the current (coarse) mesh (h, p). Then, the refinement in both h and p is performed and the optimal mesh is selected by maximization of the function ω defined by equation (1). For large problems computation of the fine mesh solution may be time-consuming. However, even a only partially converged solution obtained by e.g. a fast two-grid solver may be used to guide the optimal hp-refinement.
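The selection step can be illustrated by the following sketch; the data structure and the scalar error measure are simplifications (the actual strategy uses H¹ projection-based interpolation errors of the fine mesh solution), intended only to show the "largest error decrease per added degree of freedom" criterion of equation (1).

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str         # e.g. "h-aniso", "p+1", "hp"
    error: float      # interpolation error of the fine-mesh solution on this candidate
    added_dofs: int   # degrees of freedom added with respect to the current element

def best_refinement(current_error, candidates):
    """Pick the candidate maximizing error reduction per added DOF, cf. equation (1)."""
    return max(candidates, key=lambda c: (current_error - c.error) / max(c.added_dofs, 1))

# hypothetical numbers for one coarse element
options = [Candidate("h-iso", 0.020, 9),
           Candidate("p+1", 0.015, 5),
           Candidate("hp", 0.008, 14)]
print(best_refinement(0.05, options).name)
```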

In this paper the convergence of the hp adaptation strategy for elastic-plastic problems is examined for two-scale modeling and some modifications of the algorithm are proposed. According to the literature (Barthold et al., 1998; Cecot, 2007; Gallimard et al., 1996), inelastic deformations should be accounted for in a special way in a posteriori error estimates in order to obtain appropriate stress approximation accuracy. Additional h-refinement is proposed along the elastic-plastic interface, which is a place of lower solution regularity. Such an algorithm is called here the modified (automatic) hp-refinement.

Fig. 1. RVE. Boundary conditions, elastic and plastic zones. No-penetration boundary conditions were assumed on the left and bottom sides. Zero and constant loading (220 MPa) were applied at the top and right edges.

Table 1. Material parameters.

Material parameters inclusion matrix

Young modulus (GPa) 300 100

Poisson ratio 0.3 0.3

Yield strength (MPa) 300 200

Hardening coefficient (GPa) 30 10

To verify the efficiency of automatic hp-adaptive FEM for inelastic problems, an RVE with a cylinder-like inclusion was analyzed. A quarter of the domain was considered (figure 1). Both materials underwent elastic-plastic deformations. More precisely, a plane strain problem with the Mises yield condition, the normality rule with kinematic hardening, and the boundary conditions specified in figure 1 is considered. Material parameters are collected in table 1. After additional h-refinements of the elements which contained both elastic and plastic parts, the elastic-plastic zone was successfully detected. A comparison of the meshes obtained by the original and modified hp


algorithm is shown in figure 2. The convergence history is presented in figure 3. On the basis of this and other examples, not presented here, one may conclude that the original automatic hp algorithm works well for inelastic problems, even though the elastic-plastic interface is not detected in an accurate way.

NDOF = 21048 NDOF = 54052

Fig. 2. RVE. Mesh after 40 steps of original and 20 steps of modified hp-refinements (grey scale indicates order of approximation).

Fig. 3. RVE. Convergence of error norm.

3. MIXED FINITE ELEMENT METHOD

The mixed method enables independent approximation of at least two fields. Such an approximation of stresses is useful in multiscale computations, where homogenization is based on the evaluation of stress in the micro-scale. However, stable mixed finite elements for solid mechanics are very difficult to construct. They have to provide symmetry of the stress tensor and continuity of only the traction across interelement boundaries, rather than of all the stress components. One may use a modified Hellinger-Reissner principle, in which the stress tensor symmetry is enforced in a weak way (Arnold et al., 2007; Qiu & Demkowicz, 2009).

The problem has the following form: find σ ∈ H(div, Ω, M), u ∈ L²(Ω, V) and p ∈ L²(Ω, K) such that

$\int_\Omega \boldsymbol{\tau} : \mathsf{C}^{-1}\boldsymbol{\sigma}\, d\Omega + \int_\Omega \mathrm{div}\,\boldsymbol{\tau}\cdot \mathbf{u}\, d\Omega + \int_\Omega \boldsymbol{\tau} : \mathbf{p}\, d\Omega = \int_{\partial\Omega} \boldsymbol{\tau}\mathbf{n}\cdot \hat{\mathbf{u}}\, ds$
$\int_\Omega \mathbf{v}\cdot \mathrm{div}\,\boldsymbol{\sigma}\, d\Omega + \int_\Omega \mathbf{v}\cdot \mathbf{b}\, d\Omega = 0$
$\int_\Omega \mathbf{q} : \boldsymbol{\sigma}\, d\Omega = 0$   (2)

for all τ ∈ H(div, Ω, M), v ∈ L²(Ω, V), q ∈ L²(Ω, K), where M is the space of second order, but not necessarily symmetric, tensors and K is the space of skew-symmetric tensors.

An example of a tensor shape function that enables obtaining continuous tractions at every point of the element interfaces may have the following form

$\boldsymbol{\sigma}_x = \dfrac{1}{\det \mathbf{J}}\begin{bmatrix} x_{,\xi}\, y_{,\eta} & y_{,\xi}\, y_{,\eta} \\ -x_{,\xi}\, x_{,\eta} & -y_{,\xi}\, x_{,\eta} \end{bmatrix}$   (3)

where ξ and η denote coordinates of master element, x and y are coordinates of physical element, J stands for Jacobi matrix of transformation between those elements.

One may also use the formula obtained by the Piola transformation, which gives the following stress shape function possessing the same properties

$\boldsymbol{\sigma}_x = \dfrac{1}{\det \mathbf{J}}\begin{bmatrix} x_{,\xi} & y_{,\xi} \\ 0 & 0 \end{bmatrix}$   (4)

The main difference between both formulas is in the definition of the degrees of freedom (traction in the normal/tangent or x/y directions). A comparison of the traction component continuity of the approximations by equations (3) and (4) is shown in figure 4.
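As a hedged illustration, the snippet below evaluates a shape function of the form (4) from the Jacobian of a bilinear quadrilateral at a given master-element point; the element geometry and the chosen reference function are assumptions made for the example, not code from the paper.

```python
import numpy as np

def bilinear_jacobian(nodes, xi, eta):
    """Jacobian d(x,y)/d(xi,eta) of a 4-node quad with vertices `nodes` (4x2),
    master element [-1,1]^2, standard bilinear shape functions."""
    dN = 0.25 * np.array([[-(1 - eta), -(1 - xi)],
                          [ (1 - eta), -(1 + xi)],
                          [ (1 + eta),  (1 + xi)],
                          [-(1 + eta),  (1 - xi)]])
    return nodes.T @ dN          # 2x2 matrix [[x_xi, x_eta], [y_xi, y_eta]]

def piola_stress_shape(nodes, xi, eta):
    """Stress shape function of equation (4): (1/detJ) * [[x_xi, y_xi], [0, 0]]."""
    J = bilinear_jacobian(nodes, xi, eta)
    sigma_hat = np.array([[1.0, 0.0], [0.0, 0.0]])    # reference shape function (assumed)
    return (sigma_hat @ J.T) / np.linalg.det(J)       # rows: (x_xi, y_xi), (0, 0)

quad = np.array([[0.0, 0.0], [2.0, 0.1], [2.2, 1.0], [0.1, 0.9]])
print(piola_stress_shape(quad, 0.3, -0.2))
```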

The RVE with a square-like inclusion in the plane strain state was considered as a benchmark for the proposed mixed approximation. A quarter of the domain was taken into account (figure 5) with appropriate boundary conditions. Deformations were only in the elastic range. In this example the inclusion was much weaker than the matrix (Young modulus: 0.002 GPa and 200 GPa, respectively; Poisson ratio for both materials: 0.3). A simulation of tension was performed and in this way the effective material parameters were computed. The convergence of the effective Young modulus obtained by the classical FEM (displacement formulation) and the mixed method (displacement-stress formulation) is compared in figure 6. One may observe that if we use both methods with a small number of degrees of freedom we are



able to evaluate the effective value with good accuracy as an average of both solutions.

Fig. 5. RVE. Boundary conditions. No-penetration boundary conditions were assumed on the left and bottom sides. Zero and constant loading were applied at the top and right edges.

4. MODELING ERROR ESTIMATION

Replacement of a heterogeneous body by a homogenized one with effective material parameters introduces an error, related to incomplete information about the microstructure. Thus, it may happen that the homogenization should not be used for a certain part of the domain. A global explicit estimate using the homogenized (coarse) elasticity tensor and the actual fine-scale elasticity tensor was proposed by Zohdi et al. (1996). However, it is not able to capture the local error. A scale adaptation strategy developed by Temizer and Wriggers (2011) was used to account for the loss of accuracy in the finite deformation analysis of macrostructures. In that method the adaptation zones, which correspond to regions with high strain gradients, are identified based on a post-processing step on the homogenized solution. Subsequently, for critical zones, an exact microstructural representation is introduced, without intermediate models.

Here, other possibilities of modeling error estimation are presented, since this issue is essential for reliability of the results. One is based on the solution of the heterogeneous problem in selected subdomains with boundary conditions assumed on the basis of the homogenized solution. Such an error estimate consists of a few steps:
1) compute the homogenized solution u0 in domain Ω,
2) select a part of the body, where the error should be estimated, and consider the heterogeneous material in this subdomain,
3) solve the boundary value problem for the cut-off heterogeneous subdomain A with boundary conditions resulting from the homogenized solution; obtain the solution uI and consider it in a smaller, truncated part B of the selected heterogeneous domain,
4) estimate the error between the solution uI and the homogenized one u0 in subdomain B.
Another possibility is based on residuum, by

analogy to the explicit residual error indicator for the FEM solution (Babuska & Miller, 1987; Babuska & Rheinboldt, 1978). The proposed algorithm of homogenization error estimation may be stated in the following way:
1) compute effective material parameters for the homogenized domain,

Fig. 4. Traction component continuity. The arrows along the common edge denote tractions evaluated for the stress fields of adjacent elements along this edge.

Fig. 6. Effective Young modulus.


2) solve the homogenized problem with the effective properties to obtain u0,
3) compute the residuum R of the equilibrium equation div σ + b = 0 for the heterogeneous body in each macro-scale finite element, where b denotes the body forces and σ stands for the stress tensor (in fact, R is a distribution; its norm, which we are interested in, may be bounded by the norms of the regular part of R and the jumps of the first derivatives of the solution),
4) compute the jumps of tractions at the interfaces of finite elements (J_e on ∂e) and of material components (J_m on ∂m ⊂ ∂e), where ∂e denotes a common edge of adjacent elements and ∂m stands for the material (m1, m2) interfaces,
5) wherever $\eta_R = h\|R\|_0 + h^{1/2}\|J_e\|_0 + h^{1/2}\|J_m\|_0$ is large, homogenization should not be used.
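A schematic evaluation of this indicator is sketched below; the element-wise norms are assumed to be delivered by some post-processing routine (here they are simply passed in as numbers), so the snippet only shows how the three contributions are combined and thresholded.

```python
import math

def residual_indicator(h, r_norm, je_norm, jm_norm):
    """eta_R = h*||R||_0 + sqrt(h)*||J_e||_0 + sqrt(h)*||J_m||_0 for one macro element."""
    return h * r_norm + math.sqrt(h) * (je_norm + jm_norm)

# hypothetical per-element data: (h, ||R||, ||J_e||, ||J_m||)
elements = {1: (0.1, 2.0, 0.5, 0.1), 2: (0.1, 9.0, 3.0, 2.5), 3: (0.05, 1.0, 0.2, 0.0)}
tolerance = 1.0
do_not_homogenize = [e for e, data in elements.items()
                     if residual_indicator(*data) > tolerance]
print(do_not_homogenize)   # elements where the heterogeneous model should be kept
```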

The L-shaped domain in the plane strain state was considered to perform the numerical tests. The metal matrix was reinforced by cylinder-like inclusions, distributed uniformly. The assumed boundary conditions, as well as the material distribution, are shown in figure 7.

Fig. 7. L-shaped domain. Boundary conditions and material distribution.

A part of the domain (the reentrant corner of the L-shaped domain, figure 8) was selected and the estimation by subdomain solutions was used. The estimated error for the displacements is as follows:

$\dfrac{\|\mathbf{u}^I - \mathbf{u}^0\|_{0,B}}{\|\mathbf{u}^0\|_{0,B}} \approx 0.13\%$

The residual modeling error estimate was also used for this example. The residuum in each macro-element of the homogenized body was calculated and the error indicators were evaluated. The assumed admissible error enables selection of the subdomains which should not be homogenized (figure 9). Automatic hp refinement enables obtaining a mesh that captures all the details of heterogeneity in the selected subdomains.

Fig. 8. L-shaped domain. Boundary displacements for the cut-off heterogeneous domain A resulting from the homogenized solution.

Fig. 9. L-shaped domain with homogeneous and heterogeneous subdomains. New material distribution after excluding, on the basis of the residual error estimate, two subdomains from homogenization and the appropriate FE mesh (gray scale indicates order of approximation).

5. CONCLUSIONS

The paper presents the application of two efficient numerical techniques, i.e. automatic hp mesh adaptation and mixed approximation, for inelastic two-scale analysis. The prospects of both approaches were positively verified by the solution of selected numerical examples. In order to obtain reliable results, the error introduced by homogenization was estimated, giving information about the quality of the results. The numerical improvements of computational homogenization presented in this paper will be used in further applications of the approach in modeling of elastic-plastic composites.

Acknowledgment. This research was supported

by the National Science Center under grant 2011/01/B/ST6/07312.

REFERENCES

Arnold, D.N., Falk, R., Winther, R., 2007, Mixed finite element methods for linear elasticity with weakly imposed sym-metry, Mathematics of Computations, 76, 1699-1723.

Babuska, I., Miller, A., 1987, A feedback finite element method with a posteriori error estimation. Part 1, Comp. Meth. Appl. Mech. Engng, 61, 1-40.

Babuska, I., Rheinboldt, W.C., 1978, Error estimates for adap-tive finite element computations, Int. J. Num. Meth. Engng, 12, 1597-1615.


Barthold, F., Schmidt, M., Stein, E., 1998, Error indicators and mesh refinements for finite-element-computations of elastoplastic deformations, Computational Mechanics, 22, 225-238.

Cecot, W., 2007, Adaptive FEM analysis of selected elastic-visco-plastic problems, Comp. Meth. Appl. Mech. Engng, 196, 3859-3870.

Cecot, W., Serafin, M., Klimczak, M., 2012, Reliability of computational homogenization, International US-Poland Workshop: Multiscale Computational Modeling of Cementitious Materials, ISBN 978-83-7242-667-3, 183-194.

Demkowicz, L., Rachowicz, W., Devloo, Ph., 2002, A fully automatic hp-adaptivity, Journal of Scientific Compu-ting, 17, 127-155.

Feyel, F., 2003, A multilevel finite element method (FE2) to describe the response of highly non-linear structures us-ing generalized continua, Comput. Methods Appl. Mech. Engrg., 192, 3233-3244.

Gallimard, L., Ladeveze, P., Pelle, J.P., 1996, Error estimation and adaptivity in elastoplasticity, Int. J. Numer. Meth. Engng, 39, 189-217.

Gitman, I., 2006, Representative Volumes and Multi-scale Mod-elling of Quasi-brittle Materials, PhD thesis, Delft Uni-versity of Technology.

Kouznetsova, V., Geers, M., Brekelmans, W., 2004, Size of a representative volume element in a second-order compu-tational homogenization framework, International Jour-nal for Multiscale Computational Engineering, 2, 575-598.

Qiu, W., Demkowicz, L., 2009, Mixed hp-finite element method for linear elasticity with weakly imposed symmetry, Comp. Meth. Appl. Mech. Engng, 198, 3682-3701.

Serafin, M., Cecot, W., 2012, Self hp-adaptive FEM for elastic-plastic problems, International Journal for Numerical Methods in Engineering (submitted).

Zohdi, T.I., Oden, J.T., Rodin, G.J., 1996, Hierarchical model-ing of heterogeneous bodies, Comput. Methods Appl. Mech. Engrg., 138, 273-298.

Temizer, I., Wriggers, P., 2011, An adaptive multiscale resolu-tion strategy for the finite deformation analysis of mi-croheterogeneous structures, Comput. Methods Appl. Mech. Engrg., 200, 2639-2661.

NUMERYCZNE ASPEKTY HOMOGENIZACJI

OBLICZENIOWEJ

Streszczenie

Computational homogenization allows a heterogeneous material to be replaced by a homogeneous medium with effective material parameters. The approach is based on a two-scale, micro/macro analysis. In the micro scale the heterogeneous material is considered within a so-called representative volume element (RVE), which is small enough to ensure scale separation and at the same time large enough to contain the information about all heterogeneities. In the macro scale a homogeneous material is assumed, with effective material parameters obtained from the RVE analysis. The transfer of information between the scales is performed at selected macro-scale points associated with independent RVEs. The solution is then approximated in the macro scale. In this way the computation time is reduced, but the correctness of the obtained results has to be guaranteed. In the micro scale an accurate representation of the microstructure is necessary, and in both scales an optimal number of degrees of freedom.

Two efficient numerical techniques are used in the work, i.e. the hp-adaptive version of the finite element method, which allows reliable results to be obtained in a relatively short time, and a multi-field formulation that provides a possibly accurate approximation of the stresses, which are the main goal of the computations. The paper also presents possibilities of estimating the homogenization error, necessary for determining the regions in which homogenization should not be used because of a too large error.

Received: September 20, 2012 Received in a revised form: October 31, 2012

Accepted: November 9, 2012


A POD/PGD REDUCTION APPROACH FOR AN EFFICIENT PARAMETERIZATION OF DATA-DRIVEN MATERIAL

MICROSTRUCTURE MODELS

LIANG XIA1,2, BALAJI RAGHAVAN1, PIOTR BREITKOPF1*, WEIHONG ZHANG2

1 Laboratoire Roberval, UMR 7337 UTC-CNRS, UTC, Compiègne, France 2 Engineering Simulation and Aerospace Computing (ESAC), NPU, Xi’an, China

*Corresponding author: [email protected]

Abstract

The general idea here is to produce a high quality representation of the indicator function of the different phases of the material while adequately scaling with the storage requirements for high resolution Digital Material Representation (DMR). To this end, we propose a three-stage reduction algorithm combining Proper Orthogonal Decomposition (POD) and Proper Generalized Decomposition (PGD): first, each snapshot pixel/voxel matrix is decomposed into a linear combination of tensor products of 1D basis vectors. Next a common basis is determined for the entire set of microstructure snapshots. Finally, the analysis of the dimensionality of the resulting nonlinear space yields the minimal set of parameters needed in order to represent the microstructure with sufficient precision. We showcase this approach by constructing a low-dimensional model of a two-phase composite microstructure.

Key words: parameterization of microstructure, homogenization, voxel approaches, storage costs, material uncertainties

1. INTRODUCTION

The constant increase of computing power coupled with ever-easier access to high-performance computing platforms enables the computational investigation of realistic multi-scale problems. On the other hand, the progress in material science allows us to control the material microstructure composition to an unprecedented extent. In order to accurately predict the performance of structures employing such new materials, it becomes essential to include the effects of the microstructure variation when modeling the structural behavior. The material microstructure varies on a much smaller length scale than the actual structural size. Typical examples include polycrystalline materials, functionally graded materials, porous media, and multiphase polymers. To perform a multi-scale analysis involving

these materials, one must first construct models of the microstructure variations to be used as input in the subsequent parameterized analysis. This analysis may be performed in order to answer questions such as: how does microstructure affect the structural behavior? What particular microstructure yields the desired performance? How do the inherent material uncertainties propagate at the structural level?

Given a set of 2D/3D geometrical instances (snapshots) of the Representative Volume Element (RVE) generated from a priori known information about feasible microstructure topology, a reduced-order parameterized representation is formulated, which could be directly usable in multi-scale finite element procedures. This problem, similar to those encountered in computer vision, image processing and statistical data analysis, requires the right projection space in which the set of snapshots generates


a low-dimensional manifold that can then be fitted by a parametric hyper-surface.

Basically, the parameterized representation of the material microstructure may be split into two major steps:
1. Low-dimensional model construction of various material properties within the RVE,
2. Using this model as an input to the Finite Element analysis at the RVE level.
The work introduced in this paper utilizes the

available sources of information about the variability of the microstructure to construct a set of possible realizations of the material internal structure and mechanical properties. This initial data may be provided by numerical models (Voronoi diagrams, cellular automata, ...) or may be experimentally obtained by using imaging techniques: computer tomography (CT), magnetic resonance imaging (MRI), etc. (Ghosh & Dimiduk, 2011). This set of instances, called snapshots, is mapped into a low-dimensional continuous space that spans all the admissible variations permitted by the experimental data. By exploring this low-dimensional equivalent surrogate space, one is essentially sampling over the parameterized space of material topology variations that satisfy the (simulated) experimental data. This low-dimensional representation is subsequently used as an input model in the finite element analysis, either via a mesh or a voxel-based variant of the finite element model at the microstructural scale. The major advantage of this approach is a significant reduction in complexity, due to the analysis in a low-dimensional space, and a drastic reduction in the critical memory requirement.

This representation may then be used in FE2-type multi-scale analysis (Feyel, 2003), in continuous optimization procedures (Sigmund & Torquato, 1997) or within a stochastic framework to include the effects of the input uncertainties at the material level in order to understand how they propagate and affect the performance of the structure (Velamur Asokan & Zabaras, 2006).

The literature reveals little investigation into developing parameterized material representations. Ganapathysubramanian and Zabaras (2007) used Principal Component Analysis (PCA) to build a data-driven representation of the property variations of random heterogeneous media. A general framework has been proposed by Raghavan et al. (2010) for POD-based shape representation by separating the space variables and the design variables in the shape optimization context. In this paper, an extension of this technique to material microstructures is made by separating the space variables. The overall goal is to linearly scale the storage requirements in order to cope with the ever-increasing resolution of microstructural snapshots.

The paper is organized in the following manner: section 2 presents the general description of the overall problem. The POD and PGD-POD approaches are introduced in Sections 3 and 4, respectively. Section 5 compares the reconstruction errors obtained using the two approaches, based on the approximation of a defined two-phase periodic microstructure. The paper ends with concluding comments and suggestions for future work.

2. PROBLEM DESCRIPTION

Consider a material sample defined by a real-valued continuous, or N × N discrete, density map s = s(x, y, v) defined over a square, periodic domain Ω = [0,1] × [0,1] and depending on a set of (possibly unknown) parameters (design variables) v ∈ R^p. The problem addressed is how to identify a smallest set of design variables given a set of M snapshots (instances, realizations, samples or images) of the microstructure. The snapshots are given by N × N matrices S_k, k = 1…M, such that $S_{k,ij} = s(x_i, y_j, \mathbf{v}_k)$, i, j = 1…N, with x_n, y_n defining a regular grid of data points.

3. POD MODEL OF THE MICROSTRUCTURE

Consider a set of discrete 2D snapshots S_k, k = 1…M. Each snapshot matrix is stored in a column vector s_k of length N². The full set of snapshots is stored in an N² × M matrix [s_1 … s_M], centered around the mean snapshot $\bar{\mathbf{s}} = \frac{1}{M}\sum_k \mathbf{s}_k$. The interpolation may be performed using standard 2D finite element shape functions φ(x, y).

$s_k(x, y, \mathbf{v}_k) = \boldsymbol{\varphi}(x, y)^T \mathbf{s}_k$   (1)

The snapshot matrix may be decomposed by Singular Value Decomposition (SVD)

$\left[\mathbf{s}_1 - \bar{\mathbf{s}} \;\; \cdots \;\; \mathbf{s}_M - \bar{\mathbf{s}}\right] = \mathbf{U}\mathbf{D}\mathbf{V}^T$   (2)

with the U and V matrices containing respectively the left and right singular vectors.

Taking a (reasonable) assumption that M ≪ N², we define a projection basis composed of the first m


left singular vectors, Φ = U(1:N², 1:m). An arbitrary centered snapshot is approximated by

$\mathbf{s}_k - \bar{\mathbf{s}} \approx \boldsymbol{\Phi}\boldsymbol{\alpha}_k, \qquad \boldsymbol{\alpha}_k = \boldsymbol{\Phi}^T\left(\mathbf{s}_k - \bar{\mathbf{s}}\right)$   (3)

The relative Frobenius norm of the reconstruction error of the whole matrix of snapshots is

$\dfrac{\sum_{k=1}^{M}\left(\tilde{\mathbf{s}}_k - \hat{\mathbf{s}}_k\right)^T\left(\tilde{\mathbf{s}}_k - \hat{\mathbf{s}}_k\right)}{\sum_{k=1}^{M}\tilde{\mathbf{s}}_k^T\tilde{\mathbf{s}}_k} = \dfrac{\sum_{i=m+1}^{M}\lambda_i^2}{\sum_{i=1}^{M}\lambda_i^2}$   (4)

where $\tilde{\mathbf{s}}_k = \mathbf{s}_k - \bar{\mathbf{s}}$ and $\hat{\mathbf{s}}_k = \boldsymbol{\Phi}\boldsymbol{\alpha}_k$,

so for the dropped modes i, i = m + 1,…,M, the error is given explicitly by the sum of the corresponding diagonal entries λ_i of D, squared.
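The POD construction can be illustrated with a few lines of NumPy; the snapshot generator below is a made-up placeholder and the truncation level m is arbitrary, but the error computed from the discarded singular values follows equation (4).

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 50                                    # grid resolution and number of snapshots

snapshots = rng.random((N * N, M))               # placeholder data standing in for density maps

s_mean = snapshots.mean(axis=1, keepdims=True)   # mean snapshot
centered = snapshots - s_mean
U, lam, Vt = np.linalg.svd(centered, full_matrices=False)

m = 10
Phi = U[:, :m]                                   # projection basis: first m left singular vectors
alpha = Phi.T @ centered                         # reduced coordinates alpha_k of each snapshot

# relative Frobenius reconstruction error, cf. equation (4)
err_modes = np.sum(lam[m:] ** 2) / np.sum(lam ** 2)
err_direct = (np.linalg.norm(centered - Phi @ alpha, "fro") /
              np.linalg.norm(centered, "fro")) ** 2
print(err_modes, err_direct)                     # the two values coincide
```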

It is thus possible to build a mapping giving, for each microstructure instance generated by a set of design variables v ∈ R^p, a unique image α ∈ R^m. There are, however, two problems to be solved: an arbitrary α ∈ R^m does not necessarily yield a v ∈ R^p, and the dimension p of v is not known a priori.

Both problems are to be addressed here by building a manifold of admissible microstructures. This is done locally for a snapshot by analyzing the local dimensionality of the space spanned by the coefficient vectors α_k. The interpolation of the coefficients is then formulated as a minimization problem constrained by manifolds of admissible shapes.

Combining the above with the spatial interpolation functions, we express s = s(x, y, v) for an arbitrary value of the design variables not belonging to the initial set and at any point in Ω, possibly not on the grid. The bi-level representation allows us to separate the space variables x and y from the design variables v. Assuming that the basis vectors, defined at the grid points, are constant and that only the coefficients α_i depend on the design variables, we rewrite equation (1)

$s(x, y, \mathbf{v}) = \bar{s}(x, y) + \sum_{i=1}^{m} \alpha_i(\mathbf{v})\, \bar{\varphi}_i(x, y)$   (5)

with $\bar{s}(x, y) = \boldsymbol{\varphi}(x, y)^T \bar{\mathbf{s}}$ and $\bar{\varphi}_i(x, y) = \boldsymbol{\varphi}(x, y)^T \boldsymbol{\phi}_i$.

The storage requirements for this approach are m × N² + m × M, which may be a problem when the resolution of the grid increases. This is even more critical when extending the approach to 3D, with the storage becoming m × N³ + m × M, which is definitely not scalable: for N = 1024 and M = 100 some 800 GB (considering 8-byte floating point numbers) are necessary for the modes only, and for N = 4096 and M = 1000 about 50 TB of memory, which goes well beyond the capacity of current workstations. The second problem concerns the sensitivity of the modes with respect to the order in which the matrix terms are rearranged into a vector.

Therefore, there is a clear need for an approach not requiring renumbering of the matrix entries and scaling better with increasing resolution. The proposed algorithm is presented in the next section.

4. “PGD-POD” MICROSTRUCTURAL MODEL WITH SEPARATED SPACE VARIABLES

In the previous section, an interpolation form separating the design variables from the space coordinates was proposed. In this section, a further separation is performed into the individual space dimensions x and y. Given an N × N grid of sampling points and a matrix snapshot of the density map S_k, with $S_{k,ij} = s(x_i, y_j, \mathbf{v}_k)$, i, j = 1…N, the spatial interpolation of $s(x, y, \mathbf{v}_k)$ is

$s_k(x, y, \mathbf{v}_k) = \boldsymbol{\theta}(x)^T \mathbf{S}_k\, \boldsymbol{\psi}(y)$   (6)

by means of the standard 1D finite element shape functions θ(x) and ψ(y).

Instead of rearranging the matrix snapshots S_k into a column vector s_k and performing SVD directly on the data set, the idea is to transform each snapshot S_k to a matrix E_k of reduced dimensions in terms of two separate bases Θ and Ψ,

$\mathbf{S}_k \approx \boldsymbol{\Theta}\, \mathbf{E}_k\, \boldsymbol{\Psi}^T$   (7)

The dimension of E_k is now m_x × m_y, decided by the dimensions of the two bases: m_x of Θ and m_y of Ψ; the process of basis extraction is given afterwards (equations (10)-(13)). For an arbitrary point (x, y) of the snapshot, we can rewrite equation (6)

$s_k(x, y, \mathbf{v}_k) = \bar{s}(x, y) + \sum_{i=1}^{m_x}\sum_{j=1}^{m_y} E_{k,ij}\, \bar{\theta}_i(x)\, \bar{\psi}_j(y)$   (8)

where $\bar{s}(x, y) = \boldsymbol{\theta}(x)^T \bar{\mathbf{S}}\, \boldsymbol{\psi}(y)$, $\bar{\theta}_i(x) = \boldsymbol{\theta}(x)^T \boldsymbol{\Theta}_i$ and $\bar{\psi}_j(y) = \boldsymbol{\psi}(y)^T \boldsymbol{\Psi}_j$. For an arbitrary value of the design and



space variables, the continuous model may now be expressed as

$s(x, y, \mathbf{v}) = \bar{s}(x, y) + \sum_{i=1}^{m_x}\sum_{j=1}^{m_y} E_{ij}(\mathbf{v})\, \bar{\theta}_i(x)\, \bar{\psi}_j(y)$   (9)

Such a 1D approximation in each direction thus transforms every N × N snapshot S_k into an m_x × m_y compressed matrix E_k. The process of extraction of the two separate bases Θ and Ψ is introduced in the following. We start with the truncated SVD decomposition of each individual snapshot

$\mathbf{S}_k \approx \mathbf{U}_k \mathbf{D}_k \mathbf{V}_k^T$   (10)

where only the first m_k left and right singular vectors corresponding to the first m_k largest singular values are calculated. Next, we arrange all sub-matrices U_k(1:N, 1:m_k) and V_k(1:N, 1:m_k) into two N × (m_1 + … + m_M) matrices

$\mathbf{U}^* = \left[\mathbf{U}_1(1{:}N, 1{:}m_1)\; \cdots\; \mathbf{U}_M(1{:}N, 1{:}m_M)\right], \qquad \mathbf{V}^* = \left[\mathbf{V}_1(1{:}N, 1{:}m_1)\; \cdots\; \mathbf{V}_M(1{:}N, 1{:}m_M)\right]$   (11)

and we apply SVD separately to U* and V*

$\mathbf{U}^* = \mathbf{U}_U \mathbf{D}_U \mathbf{V}_U^T \quad\text{and}\quad \mathbf{V}^* = \mathbf{U}_V \mathbf{D}_V \mathbf{V}_V^T$   (12)

The two separate bases Θ and Ψ are composed

of the first m_x and m_y left singular vectors from U_U and from U_V, respectively:

$\boldsymbol{\Theta} = \mathbf{U}_U(1{:}N, 1{:}m_x), \qquad \boldsymbol{\Psi} = \mathbf{U}_V(1{:}N, 1{:}m_y)$   (13)

Thereafter, the matrices U_k and V_k in equation (10) may now be approximated in terms of the two separate bases Θ and Ψ

$\mathbf{U}_k \approx \boldsymbol{\Theta}\mathbf{A}_k, \; \mathbf{A}_k = \boldsymbol{\Theta}^T\mathbf{U}_k \quad\text{and}\quad \mathbf{V}_k \approx \boldsymbol{\Psi}\mathbf{B}_k, \; \mathbf{B}_k = \boldsymbol{\Psi}^T\mathbf{V}_k$   (14)

Substituting the above into equation (10) we obtain equation (7), where the matrix $\mathbf{E}_k = \mathbf{A}_k\mathbf{D}_k\mathbf{B}_k^T$ is the only term depending on the design variables. Once the E_k are obtained, a similar approximation approach to that in section 3 is followed. Transforming E_k into a column vector e_k of length m_x · m_y, an SVD reduction is performed on the full (m_x · m_y) × M matrix [e_1 … e_M]. Then a new mapping is built between each microstructure v ∈ R^p and the coefficients β ∈ R^m calculated by the SVD of [e_1 … e_M]. The storage requirements for this approach are (m_x + m_y) × N + m × m_x × m_y + m × M, which is significantly less than m × N² + m × M. When extending this approach to 3D, the storage requirement would decrease drastically from m × N³ + m × M to (m_x + m_y + m_z) × N + m × m_x × m_y × m_z + m × M.

The reconstruction error of all snapshots in this approach is calculated in a similar way as in equation (4). Note that two factors, m_x and m_y in the basis extraction and m in the SVD reduction of [e_1 … e_M], actually influence the reconstruction error in this approach.
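A simplified sketch of the basis extraction and compression is given below; it forms E_k by direct projection onto the common bases rather than through the per-snapshot factors of equation (14), and the random snapshots are placeholders, so it should be read as an outline of the procedure under these assumptions rather than the authors' implementation.

```python
import numpy as np

def separated_bases(snapshots, mk, mx, my):
    """Common 1D bases Theta (x) and Psi (y) from N x N snapshots, cf. equations (10)-(13)."""
    U_stack, V_stack = [], []
    for S in snapshots:
        U, d, Vt = np.linalg.svd(S, full_matrices=False)          # per-snapshot SVD, eq. (10)
        U_stack.append(U[:, :mk])
        V_stack.append(Vt.T[:, :mk])
    U_star, V_star = np.hstack(U_stack), np.hstack(V_stack)       # stacked factors, eq. (11)
    Theta = np.linalg.svd(U_star, full_matrices=False)[0][:, :mx] # common bases, eqs. (12)-(13)
    Psi = np.linalg.svd(V_star, full_matrices=False)[0][:, :my]
    return Theta, Psi

rng = np.random.default_rng(1)
N, M = 64, 20
snaps = [rng.random((N, N)) for _ in range(M)]   # placeholder snapshots

Theta, Psi = separated_bases(snaps, mk=10, mx=15, my=15)
E = [Theta.T @ S @ Psi for S in snaps]           # compressed mx x my matrices (direct projection)

err = np.linalg.norm(snaps[0] - Theta @ E[0] @ Psi.T) / np.linalg.norm(snaps[0])
print(E[0].shape, round(err, 3))
```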

5. PARAMETERIZATION OF A TWO-PHASE RVE

In this section, a comparison is given for both proposed approaches on a commonly analyzed periodic two-phase microstructure pattern as shown in figure 1. Snapshots of such a pattern could be utilized to model various types of materials; a list of them is given in table 1. The periodic snapshot is defined by two parameters controlling the radii of two groups of circular inclusions. 500 snapshots of resolution 256×256 are randomly generated for a local approximation.

Table 1. List of possible material types.

Material Type Reference

Porous Aluminum Kouznetsova et al. 2001

Reinforced Alloys Ghosh et al. 2001

Fiber Composites Zeman and Sejnoha 2001

Quasi-brittle Materials Nguyen et al. 2010

Fig. 1. Two-phase, two-parameter microstructure snapshots.

5.1. POD approach

SVD is performed directly on the data set. From figure 2, it can be seen that the α's form a set of two-dimensional manifolds rather than a cloud of points in 3D space, regardless of the particular triplet of modes used, clearly indicating that the design domain is parameterized by two parameters t1, t2, as is known a priori. This means that α1 = α1(t1, t2), α2 = α2(t1, t2), etc. The surfaces formed by the α's could be interpreted as the set of all possible "constraints" (direct geometric constraints, technological



constraints, etc., that are difficult to express mathematically). Therefore, a new microstructure snapshot could be generated parametrically in a reduced dimension by taking the surface coordinates t1 and t2 as design variables.

5.2. PGD-POD approach

By extracting basis vectors in both directions, each snapshot matrix S is first transformed to a reduced matrix E. For clearer visualization of the reconstruction error in the basis extraction process, we assume mx = my, and the errors versus mx and my are given in figure 3. The three curves correspond to cases where a different number of modes is retained after the SVD of each snapshot. The curve with data points marked using circles gives the error when retaining 100% of the projection energy, i.e., m1 = m2 = … = mM = 256. The curve with the data points marked using

squares is the result of retaining 99.9% of the projection energy; the average number of retained modes is about 147. When 99% of the projection energy is retained, the average number of modes is reduced to 122. Such a reduction makes the SVD in equations (13) and (14) a little more effective, as shown in the curve marked with triangles, but an original error of 1% is introduced at the same time (when mx = my = 256).

Note that mx does not have to equal my, especially when anisotropic materials are considered, and obviously the RVE considered here is anisotropic. Figure 4 shows the reconstruction error versus mx and my chosen independently. This means that a further reduction in the storage requirement may be achieved by choosing mx and my independently.

Only the case retaining 99.9% of the projection energy is considered for each snapshot SVD. By setting mx = my = 180, the data set of 256×256 matrices S is transformed to reduced 180×180 matrices E with an introduced error of 4%.

In the next step, POD is performed on the reduced data set of E's. The dimension of the data set is reduced from 256²×500 to 180²×500. The manifolds formed by the β's (see figure 5) are similar to those in the previous approach, which indicates that the microstructure has two parameters and also manifests the fact that PGD maintains the interrelationship among the snapshots. The microstructure can be parameterized again by taking the surface coordinates t1 and t2 as design variables, now with a reduced storage requirement.

Fig. 2. 2D α-Manifolds for the data set.

Fig. 3. Reconstruction errors versus m_x = m_y in three cases.

Fig. 4. Reconstruction errors versus mx and my in the case of the square-marked curve in figure 3.


5.3. Comparison of the reconstruction errors

A comparison of the reconstruction errors obtained using the two approaches is given in this section. Figure 6 plots the two error curves against the number of retained modes in both approaches. The curve marked with squares, the error of the POD approach, is calculated by equation (4). The curve marked with triangles, the error of the PGD-POD approach, is calculated similarly with the presupposition of retaining mx = my = 180 in the PGD process.

Fig. 6. The reconstruction errors of the two approaches.

Fig. 7. Comparison of the reconstructed snapshots, from left to right: Original Snapshot, POD reconstructed and PGD-POD reconstructed.

Figure 6 shows that the two curves match each other, varying in a similar trend, except that the red curve converges to zero while the blue curve converges to a value of 4% due to the reduction in the PGD process. If the expected reconstruction error is no less than 10%, then the numbers of modes needed for the two approaches are close to each other. Considering an error of 20%, we retain the first 5 modes for both approaches. The result of the reconstruction is shown in figure 7, where there is no obvious difference between the two reconstructed snapshots. Thereafter, the PGD-POD approach can achieve a similar result with a much smaller storage requirement compared to the POD approach.

6. CONCLUSIONS AND PERSPECTIVES

A three-stage model reduction scheme combining Proper Orthogonal Decomposition (POD) and Proper Generalized Decomposition (PGD) has been developed to build a reduced order model for the efficient parameterization of material microstructures. The proposed model maintains the high quality of the reconstructed microstructure snapshots with a significantly reduced storage requirement compared to the traditional POD model. With a reduced order model of this type, additional investigations may be conducted into the prediction and optimization of material properties using microstructures.

Acknowledgement. The authors acknowledge the support of OSEO in the scope of the FUI OASIS project F1012003Z, Labex MS2T and of the China Scholarship Council.


Fig. 5. 2D β-Manifolds for the reduced data set.



PARAMETERIZATION OF THE DIGITAL REPRESENTATION OF A MATERIAL MICROSTRUCTURE BY POD/PGD REDUCTION

Summary

The general idea of the work is the automatic generation of a precise function representing the topology and geometry of the individual material phases, in order to obtain a computational model with a minimal number of parameters. To this end we propose a three-stage image reduction algorithm combining the features of the POD and PGD decompositions. In the first stage, the image matrix of the representative volume element is decomposed into a linear combination of tensor products of one-dimensional basis vectors. Next, a common basis is built for the whole set of microstructure images. In the third stage, the dimensional analysis of the obtained topological manifold yields the minimal set of parameters needed to represent the microstructure with adequate accuracy. As an example, the construction of a low-dimensional model of a two-phase composite microstructure is given.

Received: October 16, 2012 Received in a revised form: November 8, 2012

Accepted: November 23, 2012


LOCAL NUMERICAL HOMOGENIZATION IN MODELING OF HETEROGENEOUS VISCO-ELASTIC MATERIALS

MAREK KLIMCZAK*, WITOLD CECOT

Cracow University of Technology, ul. Warszawska 24, 31-155 Kraków *Corresponding author: [email protected]

Abstract

The main objective of this paper is to present the prospects of application of local numerical homogenization to visco-elastic problems. Local numerical homogenization is one of the computational homogenization methods, proposed by Jhurani in 2009 for linear problems. Its main advantage is that it can be used in the case of modeling of heterogeneous materials with neither distinct scale separation nor periodic microstructure. The main idea of the approach is to replace a group of many small finite elements by one macro element. The coarse element stiffness matrix is computed on the basis of the fine element matrices. In such a way one obtains a coarse mesh approximation of the time-consuming fine mesh solution.

In this paper we use the Burgers model to describe inelastic deformations; however, any other constitutive equations may be applied. In the 1D case the Burgers model is interpreted as a combination of springs and dashpots and it is mainly used for bituminous materials (e.g. binders or asphalt mix). Because of rheological effects a transient analysis is necessary.

Integration of local numerical homogenization with the Burgers model should improve modeling of heterogeneous visco-elastic materials. The approach we propose can reduce the computational cost of the analysis without deterioration of the modeling reliability. We present numerical results of 1D and 2D analysis for selected problems that provide a comparison between the 'brute force' FEM approach and local numerical homogenization in application to modeling of heterogeneous visco-elastic materials, in order to validate the technique.

Key words: local numerical homogenization, visco-elasticity, Burgers model

1. INTRODUCTION

Most new materials are composites of different kinds. Before their implementation they are thoroughly tested. Numerical tests can significantly reduce the cost of the design process by eliminating some laboratory or 'in situ' experiments. However, numerical modeling of heterogeneous materials is a challenging task, especially in the case of inelastic deformations and non-periodic material microstructure. 'Brute force' FEM analysis, which accounts for all details of the material heterogeneity, is either highly time consuming or even impossible. Therefore, various approaches to the evaluation of effective material properties and composite response have been proposed (e.g. Geers et al., 2003; Mang et al., 2008). One of them is computational homogenization. It is used to bridge 'neighboring' analysis scales by the concept of the representative volume element (RVE). This approach was developed for example by Geers and his collaborators (Geers et al., 2003). However, we make use of the local numerical homogenization not based on the RVE concept (Jhurani, 2009; Jhurani & Demkowicz, 2009; Klimczak & Cecot, 2011), which is presented briefly below.


2. LOCAL NUMERICAL HOMOGENIZATION

Local numerical homogenization is one of the computational homogenization methods. It was proposed by Chetan Jhurani (Jhurani, 2009) for linear problems; mainly linear elasticity was discussed. Unlike other computational homogenization methods, local numerical homogenization is not based on the concept of the RVE. The main advantage of this method is that no separation of scales condition has to be fulfilled. It means that the ratio of the microscale characteristic dimension to the macroscale characteristic dimension does not have to be much smaller than unity. Moreover, periodicity of the material is not required. Therefore this method is suitable for modeling asphalt pavement structures, which are the subject of our interest.

Local numerical homogenization is thoroughly presented in Jhurani's dissertation (Jhurani, 2009). We would like to give an overview of this method and its main steps in the context of linear problems. The algorithm of the method consists of the following steps:
– assume 'trial' effective material properties of the analyzed heterogeneous domain,
– solve the auxiliary coarse mesh problem (it is advised to use adaptive FEM, but it is not obligatory),
– refine the coarse mesh within its every element in order to match all the heterogeneities (fine and coarse meshes are then naturally compatible),
– find the coarse mesh element effective matrices knowing the fine mesh element matrices,
– assemble the coarse mesh element effective matrices,
– solve the coarse mesh problem.

The core of the algorithm is the evaluation of the effective coarse mesh element matrices. Let us focus on a single coarse mesh element of the analyzed domain. Then we refine the mesh within this coarse mesh element to capture all details of the heterogeneity. For a stiffness matrix $K \in \mathbb{R}^{N \times N}$ and a load vector $f \in \mathbb{R}^{N}$, the fine mesh local FEM equation is:

$$K u = f \qquad (1)$$

Its solution is equal to:

$$u = K^{\dagger} f + u_0 \qquad (2)$$

where $K^{\dagger}$ denotes the Moore–Penrose pseudoinverse of $K$ and $u_0$ is an arbitrary vector in the null space of $K$.

Analyzing the same problem at the macroscale level, we use an effective stiffness matrix $\hat{K} \in \mathbb{R}^{M \times M}$ (M ≤ N) and a coarse scale load vector defined in terms of $f$ as $\hat{f} = A^{T} f$ ($A \in \mathbb{R}^{N \times M}$ is a chosen interpolation operator for the respective element). The coarse-scale solution is expressed in the following way:

$$\hat{u} = \hat{K}^{\dagger} \hat{f} + \hat{u}_0 \qquad (3)$$

where $\hat{K}^{\dagger}$ denotes the Moore–Penrose pseudoinverse of $\hat{K}$ and $\hat{u}_0$ is an arbitrary vector in the null space of $\hat{K}$. The difference between (2) and (3) is equal to:

$$u - A\hat{u} = \left(K^{\dagger} - A \hat{K}^{\dagger} A^{T}\right) f + \left(u_0 - A \hat{u}_0\right) \qquad (4)$$

Thus, we can express, up to a constant, the error $e \in \mathbb{R}^{N}$ as:

$$e = \left(K^{\dagger} - A \hat{K}^{\dagger} A^{T}\right) f \qquad (5)$$

Finally, minimization of the above expression (enhanced with a regularization term) with respect to $\hat{K}^{\dagger}$ (Jhurani, 2009) leads to the effective coarse mesh element stiffness matrix $\hat{K}$. This routine needs to be repeated for every coarse mesh element. Then the coarse mesh problem can be solved in the standard way.
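As an illustration of equation (5), the following minimal NumPy sketch evaluates the error functional for a given candidate coarse matrix; the fine stiffness matrix, load vector, interpolation operator and the Galerkin-type initial guess used here are all assumptions made for the example, and the actual method of Jhurani (2009) additionally minimizes this expression with a regularization term rather than merely evaluating it.

```python
import numpy as np

def homogenization_error(K, K_hat, A, f):
    """Norm of e = (K^+ - A K_hat^+ A^T) f, cf. equation (5)."""
    e = np.linalg.pinv(K) @ f - A @ (np.linalg.pinv(K_hat) @ (A.T @ f))
    return np.linalg.norm(e)

# toy example: 4 fine DOFs homogenized into 2 coarse DOFs
rng = np.random.default_rng(0)
K = np.diag([4.0, 3.0, 2.0, 5.0])        # fine-mesh stiffness matrix (toy data)
f = rng.normal(size=4)                   # fine-mesh load vector
A = np.array([[1.0, 0.0],                # interpolation operator A (N x M)
              [0.5, 0.5],
              [0.0, 1.0],
              [0.0, 1.0]])
K_hat = A.T @ K @ A                      # candidate coarse matrix (Galerkin-type guess)
print(homogenization_error(K, K_hat, A, f))
```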

3. BURGERS MODEL

The visco-elastic Burgers model is commonly used for modeling bituminous materials. Its 1D scheme is presented in figure 1.

It is a material model which efficiently simulates all of the most important response characteristics of bituminous materials, i.e. elastic, viscous and visco-elastic. Additionally, it may be easily implemented numerically. The total strain increment ($\Delta\varepsilon$) in the Burgers model is the sum of the elastic ($\Delta\varepsilon^{E}$), visco-elastic ($\Delta\varepsilon^{VE}$) and viscous ($\Delta\varepsilon^{V}$) strain increments:

$$\Delta\varepsilon = \Delta\varepsilon^{E} + \Delta\varepsilon^{VE} + \Delta\varepsilon^{V} \qquad (6)$$

Fig. 1. Burgers model.

All of the above increments are presented in detail by Collop et al. (2003) for both the 1D and 3D case. The algorithm for time integration of the Burgers model is as follows (a small illustration of the underlying 1D model is sketched below):
– evaluate the elastic solution as a trial one for a time step ti,
– calculate the inelastic strain increments,
– update the load vector considering the 'impact' of the inelastic strain increments,
– solve the problem and calculate the total strain increment,
– if the difference between the updated total strain increment and the trial one is negligible and the difference between the updated solution and the trial one is also negligible, go to the next time step; otherwise go back to the first iteration step.
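To give a concrete feel for the response the model represents, the snippet below evaluates the classical creep strain of the 1D Burgers model (a Maxwell unit in series with a Kelvin-Voigt unit) under constant stress. This is a standard textbook relation, not the paper's FEM implementation, and the parameter values are illustrative assumptions only.

```python
import numpy as np

def burgers_creep_strain(t, sigma, E1, eta1, E2, eta2):
    """Creep strain of the 1D Burgers model under a constant stress sigma."""
    elastic = sigma / E1                                           # Maxwell spring
    viscous = sigma * t / eta1                                     # Maxwell dashpot
    visco_elastic = (sigma / E2) * (1.0 - np.exp(-E2 * t / eta2))  # Kelvin-Voigt unit
    return elastic + viscous + visco_elastic

t = np.linspace(0.0, 60.0, 7)                                      # time, s
print(burgers_creep_strain(t, sigma=1.0e6, E1=1.0e9, eta1=5.0e9, E2=5.0e8, eta2=1.0e9))
```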

4. INTEGRATION OF LOCAL NUMERICAL HOMOGENIZATION WITH BURGERS MODEL

Modeling of heterogeneous visco-elastic materials also requires a time-consuming transient analysis. In this section we present the prospects of local numerical homogenization in application to the visco-elastic Burgers model. The analysis becomes much more complex, as we have to 'homogenize' at every time step.

The algorithm of the proposed approach for a known load history and known constituent characteristics is as follows (for each time step):
– solve the elastic problem using local numerical homogenization according to the routine presented in section 2,
– consider each coarse mesh element to be an independent problem: refine the mesh within this element, assume boundary conditions on the basis of the elastic solution and solve this local problem at time ti,
– update the coarse mesh load vector considering the inelastic contribution,
– assemble the coarse mesh element matrices and the updated load vectors,
– solve the coarse mesh problem.

The whole routine then requires solving several local visco-elastic problems instead of the global problem.

5. NUMERICAL RESULTS

In this section we present preliminary results of 1D and 2D numerical tests for visco-elastic materials. In figure 2 the analyzed 1D domain is presented. All material data (for the 'white' material) are the same as for the test performed by Woldekidan (2011). The 'black' material is characterized by two times weaker parameters. The cross-sectional area is equal to 50 cm². The analysis period is equal to 60 s. The load P is equal to 1.5 kN for t ≤ 15 s, then it is removed.

The results for an arbitrary time step ti are presented in figure 3. The whole domain was discretized by 10 fine mesh elements and 5 coarse mesh elements. Thus, two fine mesh elements were homogenized into one coarse mesh element. The distribution of inclusions is periodic for the sake of simplicity.

The 2D analysis was carried out for the domain presented below in figure 4. It is a 2 m by 4 m domain analyzed in the plane strain state. Its bottom edge is fixed, the left and right hand side edges can displace only in the vertical direction. A uniformly distributed tensile load (1 kN/m²) is applied to the upper edge. Material data were assumed in the same manner as for the 1D test. The Poisson ratio both for the inclusion and the matrix is equal to 0.3.

Fig. 2. Analyzed heterogeneous visco-elastic 1D domain.


Vertical displacements of the upper edge are presented in figure 5.

Fig. 3. 1D example. Displacements at an arbitrary time ti.

Fig. 4. Analyzed 2D domain with randomly distributed inclusions.

Fig. 5. 2D example. Vertical displacements along the upper edge.


6. CONCLUSIONS

Summing up, we can conclude that for the tests presented in the paper the integration of local numerical homogenization with visco-elastic material models:
– significantly reduced the computational cost of the numerical analysis,
– did not introduce significant additional error to the solution.

These results encourage us to perform further tests and obtain an effective algorithm for analyses of heterogeneous visco-elastic materials.

REFERENCES

Collop, A.C., Scarpas, A.T., Kasbergen, C., de Bondt, A., 2003, Development and finite element implementation of a stress dependent elasto-visco-plastic constitutive model with damage for asphalt, Transportation Research Record, 1832, 96-104.

Geers, M., Kouznetsova, V., Brekelmans, W., 2003, Multi-scale first-order and second-order computational homogenization of microstructures towards continua, International Journal for Multiscale Computational Engineering, 1, 371-386.

Jhurani, C.K., Demkowicz, L., 2009, ICES Reports 09-34÷09-36, The University of Texas at Austin (http://www.ices.utexas.edu/research/reports/).

Jhurani, C.K., 2009, Multiscale Modeling Using Goal-oriented Adaptivity and Numerical Homogenization, PhD thesis, The University of Texas, Austin.

Klimczak, M., Cecot, W., 2011, Local homogenization in modeling heterogeneous materials, Czasopismo Techniczne, R. 108, z. 1-B, Wydawnictwo Politechniki Krakowskiej, 87-94.

Mang, H.A., Aigner, E., Eberhardsteiner, J., Hackspiel, C., Hellmich, Ch., Hofstetter, K., Lackner, R., Pichler, B., Scheiner, St., Stürzenbecher, R., 2008, Computational Multiscale Analysis in Civil Engineering, Proc. Conf. The Eleventh East Asia-Pacific Conference on Structural Engineering and Construction, D & E Drawing and Editing Services Company, Taipei, 3-14.

Woldekidan, M.F., 2011, Response Modeling of Bitumen, Bituminous Mastic and Mortar, PhD thesis, Technische Universiteit Delft.

LOCAL NUMERICAL HOMOGENIZATION IN MODELING OF HETEROGENEOUS VISCO-ELASTIC MATERIALS

Summary

The main objective of this article is to present the prospects of using local numerical homogenization for visco-elastic problems. Local numerical homogenization is one of the computational homogenization methods. It was proposed by Ch. Jhurani in 2009 for linear problems. Its main advantage is that it can be used for modeling heterogeneous materials that exhibit neither a distinct separation of scales nor a periodic microstructure. The main feature of this approach is that the homogenization is performed after the discretization of the analyzed domain. The key step of the algorithm is the replacement of a group of fine mesh elements by a single coarse mesh element. Finally, it is sufficient to solve the problem on the domain discretized with the coarse mesh instead of the fine one.

In this article we use the Burgers model to describe visco-elastic deformations. It is, however, possible to apply a different constitutive equation describing the behaviour of the material in time. In the one-dimensional case the Burgers model is interpreted as a combination of springs and dashpots. It is used mainly for modeling the behaviour of bituminous materials (e.g. asphalt binders or mineral-asphalt mixes). Because of the rheology of the problem a transient analysis is necessary, which considerably lengthens the computation time due to the need to solve the problem at every time instant and the iterative character of the algorithm.

The integration of local numerical homogenization with the Burgers model can improve the modeling of heterogeneous visco-elastic materials. The approach we propose can reduce the computation time without deteriorating the reliability of the modeling. We present the results of 1D and 2D tests for selected problems. They were compared with the results of the 'brute force' approach, i.e. FEM computations performed with the material microstructure fully resolved. The comparison of the two methods shows that the proposed approach can be successfully used for modeling heterogeneous visco-elastic materials, since it does not introduce a significant additional error to the solution while reducing the computational cost.

Received: September 20, 2012 Received in a revised form: November 4, 2012

Accepted: November 21, 2012


NUMERICAL SIMULATIONS OF STRAIN LOCALIZATION FOR LARGE STRAIN DAMAGE-PLASTICITY MODEL

BALBINA WCISŁO*, JERZY PAMIN

Institute for Computational Civil Engineering, Cracow University of Technology, Warszawska 24, 31-155 Cracow, Poland

*Corresponding author: [email protected]

Abstract

This paper deals with the phenomenon of strain localization in nonlinear and nonlocal material models. Particularly, in the description of the material not only nonlinear constitutive relations (damage, plasticity) are included, but also geometrical nonlinearities (large strains) are taken into account. The strain localization in the analysed model has a twofold source: geometrical effects (necking) and softening due to damage of the material. To avoid pathological mesh sensitivity of numerical test results the gradient averaging is applied in the damage model. The material description is implemented within the finite element method and numerical simulations are performed for a uniaxial tensile bar benchmark. Selected results are presented for the standard and regularized continuum.

Key words: strain localization, large strain, damage, plasticity, gradient-enhancement, AceGen package

1. INTRODUCTION

One of the features of materials with microstructure (e.g. concrete or composites) is strain localization, which is closely related to the softening of the material. The localization means that from some point the whole deformation concentrates in a narrow zone while a major part of the structure experiences unloading. The strain localization has a twofold source: geometrical effects (e.g. necking of metallic bars) or material instabilities (e.g. microcracking or nonassociated plastic flow). Although the softening and the localized deformations are visible in the macroscopic material response, they have their physical origin in the evolution of the microstructure.

The application of standard continuum models to these problems fails to provide an objective description of the phenomena. For a descending stress-strain relationship (e.g. due to damage of the material) the equilibrium equations lose their ellipticity in the post-peak regime. This leads to an ill-posed boundary value problem that entails a pathological mesh-sensitivity in the numerical solution. The reason for the discretization-dependence in the computational tests is that the localization is simulated in the possibly smallest material volume, which depends on the assumed mesh.

The localization phenomenon associated with the softening response can be properly reproduced using enhanced continuum theories which have a non-local character and take into account higher deformation gradients in the constitutive description (Peerlings et al., 1996). The gradient averaging involves an internal length scale which is an additional material parameter coming from the microstructure. The parameter is usually associated with the width of the localization band and is determined for instance by an average grain size. The paper includes the description of the material model which incorporates hyperelasticity, plasticity (with or without hardening) based on Auricchio and Taylor (1999) and gradient enhanced damage. The analysis is performed with the assumption of large deformations and isothermal conditions. The implicit gradient model is incorporated, which is reflected in an additional partial differential equation to be solved. The paper presents the results of a computational test of localization in a tensile bar. The simulations are performed using the Mathematica-based package AceGen/AceFEM (Korelc, 2009). The application of the automatic code generator AceGen significantly simplifies implementation of the elaborated models and, due to automatic computation of derivatives, allows one to avoid an explicit derivation of the tangent matrix for the Newton-Raphson procedure.

2. SHORT PRESENTATION OF MATERIAL MODEL

The following simulations are performed for a material model which involves hyperelasticity, plasticity and damage and takes into account large strains. However, damage is not directly coupled with plasticity, thus depending on the assumed material parameters the model can also reproduce hyperelasticity-plasticity or hyperelasticity-damage. The material description is developed with the assumption of isotropy and isothermal conditions and is based on a classical multiplicative split of the deformation gradient into its elastic and plastic parts: $F = F^{e} F^{p}$ (Simo and Hughes, 1998). The free energy function in the presented model is assumed to be an isotropic function of the elastic left Cauchy-Green tensor $b^{e} = F^{e}(F^{e})^{T}$, a scalar measure of plastic flow $\gamma$ and a scalar damage parameter $\omega$:

$$\psi = (1-\omega)\,\psi^{e}(b^{e}) + \psi^{p}(\gamma) \qquad (1)$$

The constitutive relations of hyperelasticity are expressed through the elastic part of the free energy function, which is assumed in the following form (Simo and Hughes, 1998):

$$\psi^{e} = \frac{K}{2}\left[\frac{1}{2}\left(J_{b^e}-1\right) - \ln\sqrt{J_{b^e}}\right] + \frac{\mu}{2}\left[\operatorname{tr}\!\left(J_{b^e}^{-1/3}\, b^{e}\right) - 3\right] \qquad (2)$$

where $J_{b^e}$ is the determinant of the elastic left Cauchy-Green tensor and $K$ and $\mu$ are material parameters. The Kirchhoff stress tensor is related to the elastic left Cauchy-Green tensor $b^{e}$ with the formula:

$$\boldsymbol{\tau} = 2\,\frac{\partial \psi^{e}}{\partial b^{e}}\, b^{e} \qquad (3)$$

Damage is understood in the described model as a degradation of the elastic free energy function in the form:

$$\psi^{e,d} = (1-\omega)\,\psi^{e} \qquad (4)$$

where $\omega$ is a scalar damage variable which grows from zero for the intact material to one for a complete material destruction, and which is computed from the damage growth function. In the following numerical simulations the exponential model adopted from Mazars and Pijaudier-Cabot (1989) is applied:

$$\omega(\kappa) = 1 - \frac{\kappa_0}{\kappa}\left[1 - \alpha + \alpha\exp\bigl(-\beta(\kappa - \kappa_0)\bigr)\right] \qquad (5)$$

where $\alpha$ and $\beta$ are model parameters, $\kappa$ is a history variable calculated as $\max(\tilde\varepsilon, \kappa_0)$, $\tilde\varepsilon = \det(F) - 1$ is a deformation measure which governs damage and $\kappa_0$ is a damage threshold.

The damage condition takes the form:

$$F_d(\tilde\varepsilon, \kappa) = \tilde\varepsilon - \kappa \le 0 \qquad (6)$$

For $F_d < 0$ there is no growth of damage.

The plastic part of the presented model is described in the effective stress space, which means that it governs the behaviour of the undamaged skeleton of the material. Thus the following formulation takes into account the effective Kirchhoff stress tensor, which is computed as $\hat{\boldsymbol\tau} = \boldsymbol\tau/(1-\omega)$. The plastic regime is defined through the yield function $F_p$, which is an isotropic function of the effective Kirchhoff stress tensor and the plastic multiplier $\gamma$:

$$F_p(\hat{\boldsymbol\tau}, \gamma) = f(\hat{\boldsymbol\tau}) - \sqrt{2/3}\,\bigl(\sigma_y - q(\gamma)\bigr) \le 0 \qquad (7)$$

The function $f(\hat{\boldsymbol\tau})$ is assumed to be the Huber-Mises-Hencky equivalent stress $f = \sqrt{2 J_2}$, which depends on the second invariant of the deviatoric part $\hat{\boldsymbol t}$ of the effective Kirchhoff stress tensor: $J_2 = \tfrac{1}{2}\,\hat{\boldsymbol t} : \hat{\boldsymbol t}$. The function $q$ represents the isotropic linear hardening as $q(\gamma) = -h\gamma$, where $h$ is a hardening modulus.

The associative flow rule is assumed in the form:

$$-\tfrac{1}{2}\,\mathcal{L}_v b^{e} = \dot\gamma\, \boldsymbol{N}\, b^{e} \qquad (8)$$


where $\mathcal{L}_v b^{e}$ denotes the Lie derivative of $b^{e}$ (Bonet and Wood, 2008) and $\boldsymbol{N}$ is the normal to the yield hypersurface.
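As an aside, the exponential damage growth (5) and the loading condition (6) are straightforward to evaluate; the short sketch below uses the symbols of the reconstruction above (α, β, κ0) with the parameter values of the later numerical tests, so both the notation and the values should be read as assumptions rather than as the authors' AceGen implementation.

```python
import numpy as np

def damage(kappa, kappa0=0.011, alpha=0.99, beta=1.0):
    """Exponential damage growth function, cf. equation (5)."""
    if kappa <= kappa0:
        return 0.0
    return 1.0 - (kappa0 / kappa) * (1.0 - alpha + alpha * np.exp(-beta * (kappa - kappa0)))

def update_history(eps_tilde, kappa):
    """Damage loading: kappa grows only when F_d = eps_tilde - kappa > 0, cf. (6)."""
    return max(eps_tilde, kappa)

kappa = 0.011                                   # history variable, starts at the threshold
for eps_tilde in (0.005, 0.02, 0.05, 0.03):     # deformation measure det(F) - 1 at a point
    kappa = update_history(eps_tilde, kappa)
    print(round(eps_tilde, 3), round(kappa, 3), round(damage(kappa), 3))
```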

3. GRADIENT-TYPE REGULARIZATION

A variety of approaches can be applied to preserve numerical results from the pathological mesh-sensitivity observed for a standard continuum model reproducing the behaviour of materials exhibiting softening, see e.g. Areias et al. (2003). In this paper a gradient regularization is applied, which is not only a computationally convenient approach but is also motivated by micro-defect interactions. The introduction of the gradient enhancement into the material description requires the choice of a non-local parameter and the formulation of a corresponding averaging equation. In the literature different variables to be averaged are taken into account, e.g. the stored energy function (Steinmann, 1999) or the plastic strain measure (Żebro et al., 2008). In the described model, the local strain measure governing damage $\tilde\varepsilon$ is replaced with its non-local counterpart $\bar{\tilde\varepsilon}$, which is specified by the averaging equation:

$$\bar{\tilde\varepsilon} - l^{2}\,\nabla^{2}\bar{\tilde\varepsilon} = \tilde\varepsilon \qquad (9)$$

with homogeneous natural boundary conditions. The parameter $l$ appearing in equation (9) is a material-dependent length parameter called the internal length scale. The application of the gradient averaging to the material model including large strains is additionally difficult due to the distinction between the undeformed and the deformed configuration. Thus, the averaging equation and the internal length scale $l$ can be specified in the initial or the current configuration. Based on the results obtained by Steinmann (1999) and Wcisło et al. (2012), which show that the spatial averaging does not fully preserve the numerical results from the dependence on the discretization, the material averaging is chosen for the following simulations.
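For intuition, equation (9) can be discretized in one dimension with a few lines of code. The finite-difference sketch below (with zero-flux ends representing the homogeneous natural boundary conditions) is purely illustrative and is not the AceGen/AceFEM implementation used in the paper; the grid, the localized input field and the value l = 3 mm only loosely mirror the later examples.

```python
import numpy as np

def gradient_average_1d(eps_tilde, l, dx):
    """Solve eps_bar - l^2 * eps_bar'' = eps_tilde with zero-flux boundary conditions."""
    n = eps_tilde.size
    c = (l / dx) ** 2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * c
        if i > 0:
            A[i, i - 1] = -c
        if i < n - 1:
            A[i, i + 1] = -c
    # zero-flux ends: mirror the ghost nodes
    A[0, 1] -= c
    A[-1, -2] -= c
    return np.linalg.solve(A, eps_tilde)

x = np.linspace(0.0, 100.0, 201)                               # mm
eps_tilde = np.where(np.abs(x - 50.0) < 2.0, 0.02, 0.0)        # localized local strain measure
eps_bar = gradient_average_1d(eps_tilde, l=3.0, dx=x[1] - x[0])
print(eps_bar.max(), eps_bar[eps_bar > 1e-4].size)             # smeared peak, widened zone
```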

4. NUMERICAL SIMULATIONS OF STRAIN LOCALIZATION

In this section the numerical examples of strain localization in the hyperelastic-plastic-damage model are presented. Firstly, the results for material softening due to damage are presented for the standard and regularized continuum. In the second subsection the results for strain localization due to geometrical softening in plasticity are discussed. Finally, the complex model of hyperelasticity-plasticity coupled with gradient damage is considered. All model variants have been implemented in the Mathematica-based packages AceGen and AceFEM (Korelc, 2009). The former is an automatic code generator used for the preparation of finite element code whereas the latter is a FEM engine.

4.1. Hyperelasticity coupled with damage

The simulations of the material model including hyperelasticity coupled with local damage are performed for a tensile bar with imperfection presented in figure 1. The enforced displacement and the boundary conditions preserve the uniaxial stress state. The material parameters applied in the simulations are: E = 200 GPa, ν = 0.3, κ0 = 0.011, α = 0.99, β = 1. In the central zone the damage threshold is assumed to be κ0 = 0.01.

The results of the computational test performed for two FE discretizations with linear interpolation 20x2x2 and 40x4x4 are depicted in the first graph of figure 2. The computations for the finest mesh 80x8x8 fail for the displacement control (snap-back occurs). We can observe in figures 2 and 3 that the results significantly differ for each discretization and that the zone of strain localization covers only the middle rows of elements.

The next test is performed for the same sample but the hyperelastic model is coupled with gradient damage. The internal length parameter is assumed to be l = 3 mm. The reactions sum diagram is presented in the second graph of figure 2. It can be noticed that diagrams are close to one another, especially for the medium and the fine mesh. The deformed mesh with the damage variable distribution is presented in fig-ure 4. The simulation confirms that for the gradient model the behaviour of the sample does not depend on the adopted mesh. The width of the damage zone which is related to the internal length parameter is similar for each discretization.

Fig. 1. Geometry and boundary conditions of a bar with imperfection.


Fig. 2. Sum of reactions vs. displacement for a) hyperelasticity–damage model and b) hyperelasticity–gradient–damage model.

4.2. Hyperelasticity coupled with plasticity

Firstly, the test is performed for an ideal bar with a constant square cross-section along the length. The dimensions and the boundary conditions of the sample are the same as in the previous section and three meshes are taken into account: 20x2x2, 40x4x4 and 80x8x8 elements. The material parameters applied in the test are as follows: E = 200 GPa, ν = 0.3, σy = 300 MPa (perfect plasticity). The Huber-Mises-Hencky yield criterion is applied and the F-bar approach is incorporated to avoid locking (de Souza Neto et al., 2008).

Fig. 3. Evolution of Green strain Exx for meshes 20x2x2 and 40x4x4 (hyperelastic–damage model).

The graph of the reactions sum vs. end displacement is depicted in the first diagram of figure 5. It can be observed that in the plastic regime the diagram is descending although ideal plasticity is assumed. This is caused by taking into account the change of the cross-section during deformation. For all discretizations the loss of stability can be observed: for the coarse mesh the phenomenon is observed the earliest, whereas for the medium and the fine mesh the necking occurs at the same time.
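This descending branch can be anticipated analytically: for incompressible plastic flow the current cross-section area is $A = A_0/\lambda$, where $\lambda$ is the axial stretch, so even with a constant yield stress the axial force $F = \sigma_y A = \sigma_y A_0/\lambda$ decreases as the bar elongates. Geometric softening therefore appears without any material softening, which is consistent with the descending curves in figure 5.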

Figures 6 and 7 present the deformed sample with the final accumulated plastic strain distribution and the evolution of the Green strain Exx along the bar length, respectively. It can be noticed that the loss of stability manifests itself in multiple necking. For each discretization the number and the arrangement of the strain localization zones are different.

Fig. 4. Deformed mesh and distribution of damage variable for three discretizations (hyperelastic–gradient–damage model).


The test for the same sample but with an assumed imperfection in the middle of the bar is also performed. The imperfection is prescribed as a reduction of the yield stress to the value σy = 290 MPa. The sum of reactions diagram is presented in the second graph of figure 5. We can observe that the results depend on the adopted finite element mesh and the finer the discretization is, the less stiff the model is. In figure 8 it can be noticed that the strains localize in the middle part of the sample where the imperfection is assumed.

Fig. 5. Sum of reactions vs. displacement for a) the ideal bar and for b) the bar with imperfection – hyperelastic–plastic model.

It should also be mentioned that the same results are obtained for hyperelasticity-plasticity with hardening where the hardening modulus has a sufficiently small value, for example 0.5%E. It seems that, even in the absence of damage, regularization is necessary for large deformations and ideal plasticity or small hardening.

Fig. 7. Evolution of Green strain Exx along the bar for three meshes and ideal bar (hyperelastic–plastic model).

4.3. Hyperelasticity-plasticity coupled with gradient damage

The last test is performed for the material model which includes both plasticity and gradient damage. The tested sample is the bar with imperfection with the following material parameters: E = 200 GPa, ν = 0.3, κ0 = 0.0002, α = 0.99, β = 1, σy = 300 MPa, h = 1%E. The imperfection is as in the previous subsection.

Figures 9 and 10 present selected results of the simulation. The reaction diagrams are close for each discretization and present the plasticity regime with hardening and reduction of the reaction forces due to damage. In the analysed test, the material softening is reproduced properly due to gradient regularization and geometrical softening does not occur because of a sufficiently large value of the hardening modulus.

Fig. 6. Final accumulated plastic strain for three meshes and ideal bar (hyperelastic–plastic model).


Fig. 9. Sum of reactions vs. displacement for hyperelasticity–plasticity–gradient–damage model.

Fig. 10. Deformed mesh and damage variable distribution for discretizations 20x2x2 and 40x4x4 (hyperelasticity–plasticity–gradient–damage model).

5. CONCLUSIONS

In the paper the problem of strain localization for a material model including geometrical and material nonlinearities with the applied gradient regularization has been outlined. The considered model is briefly described and selected numerical results are presented. The simulations are performed for different variants of the model and exhibit geometrical or material softening. The hyperelastic-damage model exhibits material softening which can cause the mesh-sensitivity observed in the presented simulation results. The gradient averaging procedure, incorporating an internal length parameter, allows one to properly reproduce the material behaviour.

The numerical tests reveal that for a model incorporating ideal plasticity in the large strain regime strain localization might occur. For a sample with imperfection one zone of large strains can be predicted, in contrast to the ideal sample where multiple necks are formed. To prevent the numerical results from a pathological mesh-dependence, the application of a regularization of the plastic part of the model should be considered in future work. Moreover, the work is planned to be extended towards thermo-mechanical coupling.

Acknowledgments. The authors acknowledge fruitful discussions on the research with Dr K. Kowalczyk-Gajewska from IFTR PAS, Warsaw, Poland. The research has been carried out within contract L-5/66/DS/2012 of Cracow University of Technology, financed by the Ministry of Science and Higher Education.

REFERENCES

Areias, P., Cesar de Sa, J., and Conceicao, C., 2003, A gradient model for finite strain elastoplasticity coupled with damage, Finite Elements in Analysis and Design, 39, 1191-1235.

Auricchio, F., Taylor, R. L., 1999, A return-map algorithm for general associative isotropic elasto-plastic materials in large deformation regimes, Int. J. Plasticity, 15, 1359-1378.

Bonet, J., Wood, R. D., 2008, Nonlinear continuum mechanics for finite element analysis, Cambridge University Press, Cambridge.

Fig. 8. Final accumulated plastic strain for imperfect bar and three discretizations (hyperelastic–plastic model).


de Souza Neto, E., Peric, D., Owen, D., 2008, Computational methods for plasticity. Theory and applications, John Wiley & Sons, Ltd, Chichester, UK.

Korelc, J., 2009, Automation of primal and sensitivity analysis of transient coupled problems, Computational Mechanics, 44, 631-649.

Mazars, J., Pijaudier-Cabot, G., 1989, Continuum damage theory - application to concrete, ASCE J. Eng. Mech., 115, 345-365.

Peerlings, R., de Borst, R., Brekelmans, W., de Vree, J., 1996, Gradient-enhanced damage for quasi-brittle materials, Int. J. Numer. Meth. Engng, 39, 3391-3403.

Simo, J.C., Hughes, T.J.R., 1998, Computational Inelasticity. Interdisciplinary Applied Mathematics Vol. 7, Springer-Verlag, New York.

Steinmann, P., 1999, Formulation and computation of geometrically non-linear gradient damage, Int. J. Numer. Meth. Engng, 46, 757-779.

Wcisło, B., Pamin, J., Kowalczyk-Gajewska, K., 2012, Gradient-enhanced model for large deformations of damaging elastic-plastic materials, Arch. Mech. (to be published).

Żebro, T., Kowalczyk-Gajewska, K., Pamin, J., 2008, A geometrically nonlinear model of scalar damage coupled to plasticity, Technical Transactions, 20/3-Ś, 251-262.

NUMERICAL SIMULATIONS OF STRAIN LOCALIZATION FOR A LARGE STRAIN DAMAGE-PLASTICITY MODEL

Summary

The article concerns the phenomenon of strain localization in nonlinear and nonlocal material models. In particular, the presented material description contains not only nonlinear constitutive relations (damage, plasticity) but also geometrical nonlinearities (large strains). The strain localization in the analysed model has a twofold source: geometrical effects (necking) and softening caused by damage of the material.

The application of standard continuum models does not lead to a correct simulation of the behaviour of softening materials. This is caused by the loss of ellipticity of the equilibrium equations when the stress-strain relation enters a descending path. In such a case the strains localize in the smallest possible material volume, which in a numerical simulation is determined by the size of the finite element. To avoid a pathological dependence of the numerical test results on the discretization, a suitable regularization has to be applied. In this work gradient averaging is used, in which an internal length scale plays an essential role. It is an additional material parameter related to the microstructure, which can determine the width of the strain localization zone.

The article briefly describes the analysed elastic-plastic model coupled with damage at large strains and the applied gradient regularization. The model has been implemented in the AceGen package in the Mathematica environment and tested using the AceFEM package. Selected results of simulations of a stretched bar are presented for different variants of the adopted material description, in which strain localization related both to material softening and to geometrical softening can be observed.

Received: October 16, 2012 Received in a revised form: October 18, 2012

Accepted: October 26, 2012


THE MULTI-SCALE NUMERICAL AND EXPERIMENTAL ANALYSIS OF COLD WIRE DRAWING FOR HARDLY DEFORMABLE BIOCOMPATIBLE MAGNESIUM ALLOY

ANDRZEJ MILENIN*, PIOTR KUSTRA, DOROTA J. BYRSKA-WÓJCIK

AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland *Corresponding author: [email protected]

Abstract

The problem of determining the drawing schedule for the cold drawing of thin (less than 0.1 mm) wire from the hardly deformable magnesium alloy Ax30 with the aid of a multi-scale mathematical model is examined in the paper. The special feature of the alloy Ax30 is its mechanism of fracture on the grain boundaries. It is experimentally proven that microscopic cracks during tension tests occur long before the complete fracture of the samples. The state of metal which directly precedes the appearance of these microscopic cracks is proposed to be considered as optimal from the point of view of the restoration of plasticity by annealing. The simulation of this state in the wire drawing process and the development on this basis of wire drawing regimes are the purpose of the paper. The solution of the problem required the development of a fracture model of the alloy in the micro scale, the identification of the fracture model and its implementation into the FEM model of wire drawing. Two schedules of wire drawing are examined. The first of them, according to the results of simulation, allowed the appearance of microscopic cracks. The second regime was designed so that the microscopic cracks would not appear during wire drawing. Experimental verification is executed in laboratory conditions on a specially developed device. The annealing was carried out before each pass. The initial diameter of the billet was 0.1 mm. In the first regime it was possible to realize only 2-3 passes, after which the fracture of the wire occurred. Cracks on the grain boundaries were observed in this case on the surface of the wire. The second regime made it possible to carry out 7 passes without fracture; the obtained wire with a diameter of 0.075 mm did not contain surface defects, had high plastic characteristics and allowed further wire drawing. Thus, the validation of the developed multi-scale model is executed for two principally different conditions of deformation.

Key words: drawing process, multi-scale modeling, magnesium alloys

1. INTRODUCTION

This paper is devoted to new magnesium alloys used in medicine as soluble implants (Heublein et al., 1999; Haferkamp et al., 2001; Thomann et al., 2009). Typically, these alloys contain lithium and calcium supplements. The production of thin surgical threads for stitching tissues may be an example of application of these alloys (Seitz et al., 2011; Milenin et al., 2010b). A feature of these alloys is their low technological ductility during cold forming. As shown in previous works (Kustra et al., 2009; Milenin et al., 2010a), the technological ductility of these alloys during cyclic processes based on a combination of cold deformation and annealing is significantly lower than for most known magnesium alloys. The reason for this fact, explained in (Milenin et al., 2011), is that fractures appear on the grain boundaries long before the fracture of the sample in the macro scale during cold deformation of these alloys. These microcracks considerably worsen the restoration of plasticity by annealing.

A solution of the problem is proposed in the works (Milenin et al., 2010b; Milenin & Kustra, 2010). It is based on drawing with a hot die. Studies show that this method is effective for wire diameters larger than 0.1 mm. Obtaining a thinner wire is difficult because of the strong sensitivity to the velocity of the drawing process. Another disadvantage is that a biocompatible lubricant cannot be used, which becomes important in medical applications. Thus the solution of the listed problems requires an in-depth study of the cold drawing process for these alloys.

The aim of this work is to determine the parameters of the cold drawing of thin (less than 0.1 mm) wires by using multi-scale modeling of the wire drawing process and experimental verification of the results.

2. MECHANISM OF FRACTURE

The MgCa0.8 (0.8% Ca, 99.2% Mg) alloy and its modification Ax30 (0.8% Ca, 3.0% Al, 96.2% Mg) were selected as the materials for the study. The technique of investigating the fracture mechanism is based on stretching a sample in the microscope's vacuum chamber. During the stretching process, changes of the microstructure and the nucleation of microcracks are monitored. The experiment is described in detail in the works (Milenin et al., 2010a; Milenin et al., 2011). The tests showed that these alloys crack mainly on grain boundaries. Porosity in the sample appears long before the moment of fracture in the macro scale. An example of cracks for alloy Ax30 is shown in figure 1 in macro scale (figure 1a) and micro scale (figure 1b).

The porosity values in the stretched sample characterize the technological plasticity during multi-pass drawing. It has been proved that if microcracks do not appear in the current pass, annealing allows the plasticity to be restored (Milenin et al., 2011). Otherwise, the effectiveness of annealing is much reduced and reaching large deformations in a multi-pass process is impossible.

In figure 2 the values of porosity in the center of the sample during the tensile test of the MgCa0.8 and Ax30 alloys are shown. The values for the Az80 alloy, used in mechanical engineering, are also shown for comparison. The figure shows that in these alloys microcracks appear much earlier than in the typical magnesium alloy Az80. Thus, the increase of porosity in the early stage of deformation is a fundamental difference between the considered alloys and known magnesium alloys.

It follows from this that the development of the drawing technology should be made in such a way that in every pass the material does not develop microcracks. A multi-scale model of the wire drawing process was proposed to solve this problem in the works (Milenin et al., 2010a; Milenin et al., 2011). For micro-scale modeling of the fracture processes the boundary element method (BEM) was used, which allows easy simulation of the fracture of grain boundaries.

Fig. 1. The examples of microcracks during the tensile test of the Ax30 alloy: a) on the surface of the sample in macro scale, b) in micro scale.

Fig. 2. The porosity dependence on a total sample elongation


3. THE MULTI-SCALE MATHEMATICAL MODEL OF A DRAWING PROCESS

The macro-scale numerical model of the drawing process is based on the finite element method (FEM) and described in the paper (Milenin, 2005). The micro-scale model of deformation is based on the boundary element method. The macro-scale and micro-scale models are coupled in such a way that the results of the simulation at the macro scale, especially stresses and displacements, are the boundary conditions at the micro scale. At the micro scale the displacements, strains and stresses on the grain boundaries are computed, but the most important parameter in the micro-scale simulation is the damage parameter D, which is explained below.

The digital representation of the microstructure in the micro-scale model in the proposed BEM code is considered as a two-dimensional representative volume element (RVE) which is divided into grains (figure 3). The model at the micro scale includes the BE mesh generation based on images of a fragment of the real microstructure and the numerical solution at the micro-scale level.

Fig. 3. Photo of the microstructure (a) and BE mesh (b).

The crystallographic orientation is included in the developed program by a random parameter k, which refers to the change of elastic-plastic properties due to the various orientations of grains. The effective plastic modulus of the material for each grain is calculated as follows:

$$E_{\mathrm{eff}} = k\,\frac{\sigma_s}{\Delta\bar\varepsilon} \qquad (1)$$

where k is the random parameter, $\Delta\bar\varepsilon$ is the increment of mean equivalent strain in the grain and $\sigma_s$ is the yield stress of the material in the grain.

The Saint-Venant-Levy-Mises theory is used for the relation between stresses and increments of strains for plastic deformation:

$$\Delta\varepsilon_{ij} = \frac{3}{2}\,\frac{\Delta\bar\varepsilon}{\sigma_s}\left(\sigma_{ij} - \sigma_0\,\delta_{ij}\right) \qquad (2)$$

where $\delta_{ij}$ is the Kronecker delta, $\sigma_0$ the mean stress and $\Delta\varepsilon_{ij}$ the increment of strain components.

The solution of the boundary problem is based on Kelvin's fundamental solution (Crouch & Starfield, 1983) for two-dimensional tasks and incompressible material. The solution of the boundary problem and the fracture criteria are described in detail in previous works (Milenin et al., 2010a; Milenin et al., 2011).

The proposed criteria of crack initiation are based on the theory by L. M. Kachanov and Y. N. Rabotnov (Rabotnov, 1969). This theory was successfully used in (Diard et al., 2002) for modeling of grain boundary cracking in the case of the deformation of polycrystals. This model was modified to describe the crack initiation at the grain boundary:

$$D = \int_{0}^{\bar\varepsilon}\mathrm{d}D \le 1 \qquad (3)$$

$$\mathrm{d}D = b_1\,\frac{\bigl(\sigma_{eq}^{b}\bigr)^{b_2}}{E\,(1-D)^{b_3}}\,\mathrm{d}\bar\varepsilon \qquad (4)$$

$$\sigma_{eq}^{b} = \sqrt{\sigma_n^{2} + b_0\,\sigma_S^{2}} \qquad (5)$$

where D is the damage parameter, E the Young modulus, $\sigma_n$ the tensile (positive) component of the normal stress at the boundary between two grains, $\sigma_S$ the shear stress at the boundary between two grains, and $b_0$–$b_3$ empirical coefficients.

According to equations (3)-(5), the damage parameter is computed at the micro scale for all boundary elements and depends on the material and the stress state. The value of the parameter D varies from 0 to 1. When the value of D reaches 1 for a boundary element, the fracture criterion is met.

Crack initiation is allowed only on the internal boundaries in the developed model. The outer boundaries of the domains carry the prescribed boundary conditions and, even if the criterion is fulfilled there, they cannot be destroyed.
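To make the bookkeeping of the damage parameter concrete, the sketch below accumulates D for a single grain-boundary element and flags fracture once D reaches 1. The increment law follows the Kachanov-Rabotnov-type form of the reconstructed equations (4)-(5), so both the formula and the illustrative loading history and Young modulus are assumptions, not the exact BEM implementation of the authors.

```python
import numpy as np

B0, B1, B2, B3 = 0.02, 0.43, 0.30, -0.50   # empirical coefficients identified in the text
E = 45.0e3                                  # Young modulus, MPa (illustrative value)

def boundary_equivalent_stress(sigma_n, sigma_s):
    """Equivalent grain-boundary stress, cf. the reconstructed equation (5)."""
    return np.sqrt(max(sigma_n, 0.0) ** 2 + B0 * sigma_s ** 2)

def damage_increment(sigma_n, sigma_s, D, d_eps):
    """Damage increment, cf. the reconstructed (assumed) form of equation (4)."""
    s_eq = boundary_equivalent_stress(sigma_n, sigma_s)
    return B1 * s_eq ** B2 / (E * (1.0 - D) ** B3) * d_eps

D = 0.0
for sigma_n, sigma_s, d_eps in [(40.0, 10.0, 0.01)] * 100:   # assumed loading history
    D += damage_increment(sigma_n, sigma_s, D, d_eps)
    if D >= 1.0:
        print("fracture criterion met for this boundary element")
        break
print("final damage parameter D =", D)
```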

The determination of the empirical parameters of the fracture model at the micro scale is based on inverse analysis of experimental data. The purpose of this analysis is to minimize the difference between the measured and calculated moment of crack initiation and between the measured and calculated porosity of the sample in the micro scale during deformation (Milenin et al., 2012).

As a result of processing the experimental data shown in figure 2, the following coefficients of equations (4) and (5) for alloy Ax30 are obtained: b0 = 0.02, b1 = 0.43, b2 = 0.30, b3 = -0.50.

4. THE MULTI-SCALE MODELING OF TWO VARIANTS OF DRAWING PROCESS

For the purpose of validating the proposed technique, two variants of the wire drawing process were simulated.

Diameters of wires in variant 1: 0.1 → 0.0955 → 0.0912 → 0.087 → 0.0831 → 0.0794 → 0.0758 mm (elongation per pass 1.096).

Variant 2: 0.162 → 0.147 → 0.135 → 0.123 → 0.112 mm (elongation per pass 1.20).
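For reference, the quoted per-pass elongation coefficients follow directly from the diameter schedules, λ = (d_i/d_{i+1})², assuming volume constancy; a quick check:

```python
# Per-pass elongation coefficient lambda = (d_i / d_{i+1})**2 for both schedules
variant_1 = [0.1, 0.0955, 0.0912, 0.087, 0.0831, 0.0794, 0.0758]   # mm
variant_2 = [0.162, 0.147, 0.135, 0.123, 0.112]                    # mm

for name, d in (("variant 1", variant_1), ("variant 2", variant_2)):
    lam = [(d[i] / d[i + 1]) ** 2 for i in range(len(d) - 1)]
    print(name, [round(x, 3) for x in lam])   # ~1.096 and ~1.20 respectively
```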

The die angle in each pass was 5°. The drawing speed was 10 mm/s and was chosen in such a way that the annealing could be done in a furnace which was installed before the drawing device. All passes in each variant were geometrically similar, so the stress and strain results for all calculated passes are close. For this reason, only the first pass of each variant was simulated. In figure 4 the results of simulation (triaxiality factor) of the first pass for variant 1 (figure 4a) and variant 2 (figure 4b) are shown. The presented data show that the stresses and strains in variants 1 and 2 are significantly different. From the point of view of experience in drawing of magnesium alloys, and based on the results of the simulation in the macro scale, variant 2 is preferred because in this case the deformation is more homogeneous and the value of the tensile stresses is lower (Yoshida, 2004). However, this refers to alloys without a high propensity to microcracks in the early stages of deformation. Thus, if the experimental verification of the numerical simulation shows that variant 1 is preferable, this may be taken as a proof of the theoretical conclusions about the major impact of microcracks on technological plasticity.

Fig. 4. Distribution of the triaxiality factor: a) for variant 1; b) for variant 2.

In figure 5 the distribution of strain in the drawing direction and the vertical stresses along the centre line of the deformation zone are shown. These parameters are used as boundary conditions for the micro-scale simulation of microstructure deformation. The results of the simulation in the micro scale are shown in figure 6. As can be seen from the results (figure 6), in variant 1 cracks on the grain boundaries did not appear. The maximum value of the parameter D reached in the pass amounted to 0.89. However, in variant 2 microcracks emerge (figure 6b). This suggests that in this case the restoration of ductility of the alloy after the pass will not be possible and the number of passes before the fracture of the wire will be smaller than in variant 1.


Fig. 5. The distribution of strain in the direction of drawing (b, d) and vertical stresses along the centre line of the deformation zone (a, c) for variant 1 (a, b) and variant 2 (c, d).

5. THE EXPERIMENTAL VALIDATION OF RESULTS

The experimental validation of the results of calculations was performed in the context described above. The Ax30 alloy was used. Sunflower oil was proposed as a lubricant and the temperature of drawing was 30 °C. The methodology of obtaining the billet by the hot wire drawing process is presented in the work (Milenin & Kustra, 2010). The surface of the workpiece did not contain defects observable under the optical microscope.

Fig. 6. Results of simulation (effective strain in grains) in micro scale for variant 1 (a) and for variant 2 (b).

In variant 2 only 4 passes were performed. Hairline fractures on the grain boundaries can be observed on the surface of the wire after pass 2 using an optical microscope. The received wires were fragile and crumbled, and after pass 2 tying a knot was impossible. The developed network of cracks after pass 4 is shown in figure 7. Further attempts at annealing and drawing were unsuccessful.

Fig. 7. Network of cracks after pass 4, wire diameter 0.112 mm, variant 2.

Much higher wire quality (figure 8) and mechanical properties which allow further drawing were achieved in variant 1. A study of the mechanical properties on an INSTRON machine showed that the tensile strength Rm of the wire for all passes is not significantly different (diameter 0.0955 mm: Rm = 250.7 MPa, diameter 0.0758 mm: Rm = 252.9 MPa).

Fig. 8. Surface of the wire after drawing according to variant 1, wire diameter 0.0758 mm.

6. CONCLUSIONS

1. The prediction of the microcracks using the multi-scale model coincided with the results of the experiment. Based on the developed drawing schedule, a wire diameter of 0.0758 mm could be reached for the Ax30 alloy by cold drawing.

2. It is shown that microcracks on grain boundaries have an influence on the parameters of the technology of drawing thin wire from Mg-Ca alloys.

Acknowledgements. Financial support of the Ministry of Science and Higher Education, project no. 4131/B/T02/2009/37 and project no. 11.11.110.150, is gratefully acknowledged.

REFERENCES

Crouch, S.L., Starfield, A.M., 1983, Boundary element methods in solid mechanics, George Allen & Unwin, London, Boston, Sydney.

Diard, O., Leclercq, S., Rousselier, G., Cailletaud, G., 2002, Distribution of normal stress at grain boundaries in multicrystals: application to an intergranular damage modeling, Computational Materials Science, 18, 73-84.

Haferkamp, H., Kaese, V., Niemeyer, M., Phillip, K., Phan-Tan, T., Heublein, B., Rohde, R., 2001, Exploration of Magnesium Alloys as New Material for Implantation, Mat.-wiss. u. Werkstofftech., 32, 116-120.

Heublein, B., Rohde, R., Niemeyer, M., Kaese, V., Hartung, W., Röcken, C., Hausdorf, G., Haverich, A., 1999, Degradation of Magnesium Alloys: A New Principle in Cardiovascular Implant Technology, 11. Annual Symposium "Transcatheter Cardiovascular Therapeutics", New York.

Kustra, P., Milenin, A., Schaper, M., Grydin, O., 2009, Multiscale modeling and interpretation of tensile test of magnesium alloy in microchamber for the SEM, Computer Methods in Materials Science, 2, 207-214.

Milenin, A., 2005, Program komputerowy Drawing2d – narzędzie do analizy procesów technologicznych ciągnienia wielostopniowego, Hutnik-Wiadomości Hutnicze, 72, 100-104 (in Polish).

Milenin, A., Byrska, D.J., Grydin, O., Schaper, M., 2010a, The experimental research and the numerical modeling of the fracture phenomena in micro scale, Computer Methods in Materials Science, 2, 61-68.

Milenin, A., Kustra, P., 2010, Mathematical Model of Warm Drawing Process of Magnesium Alloys in Heated Dies, Steel Research International, 81, spec. ed., 1251-1254.

Milenin, A., Kustra, P., Seitz, J.-M., Bach, Fr.-W., Bormann, D., 2010b, Production of thin wires of magnesium alloys for surgical applications, Proc. Wire Ass. Int. Inc. Wire & Cable Technical Symposium, ed. M. Murray, Milwaukee, 61-70.

Milenin, A., Byrska, D.J., Grydin, O., 2011, The multi-scale physical and numerical modeling of fracture phenomena in the MgCa0.8 alloy, Computers and Structures, 89, 1038-1049.

Milenin, A., Byrska-Wójcik, D.J., Grydin, O., Schaper, M., 2012, The physical and numerical modeling of intergranular fracture in the Mg-Ca alloys during cold plastic deformation, 14th International Conference on Metal Forming, eds, Pietrzyk, M., Kusiak, J., Kraków, 863-866.

Rabotnov, Y.N., 1969, Creep Problems in Structural Members, North-Holland Publishing Company, Amsterdam/London.

Seitz, J.-M., Utermohlen, D., Wulf, E., Klose, C., Bach, F.-W., 2011, The Manufacture of Resorbable Suture Material from Magnesium – Drawing and Stranding of Thin Wires, Advanced Engineering Materials, 13, 1087-1095.

Thomann, M., Krause, Ch., Bormann, D., von der Hoh, N., Windhagen, H., Meyer-Lindenberg, A., 2009, Comparison of the resorbable magnesium alloys LAE442 and MgCa0.8 concerning their mechanical properties, their progress of degradation and the bone-implant-contact after 12 months implantation duration in a rabbit model, Mat.-wiss. u. Werkstofftech., 40, 82-87.

Yoshida, K., 2004, Cold drawing of magnesium alloy wire and fabrication of microscrews, Steel Grips, 2, 199-202.

MULTI-SCALE NUMERICAL MODELLING AND EXPERIMENTAL ANALYSIS OF THE COLD DRAWING PROCESS OF HARDLY DEFORMABLE BIOCOMPATIBLE MAGNESIUM ALLOYS

Summary

The work is devoted to the development of a cold drawing process for thin wires (diameter below 0.1 mm) made of the hardly deformable biocompatible magnesium alloy Ax30 with the use of a multi-scale numerical model. A characteristic feature of the Ax30 alloy is its fracture mechanism along grain boundaries. It has been proven experimentally that microcracks during the tensile test appear long before fracture of the specimen in the macro scale. The state of the metal directly preceding the appearance of microcracks is regarded as optimal with respect to the possibility of restoring plasticity by annealing. The main objectives of the work are the simulation of such a material state and the development of the drawing process on this basis. The solution of the presented problem requires the development of a micro-scale fracture model of the alloy, identification of the fracture parameters and implementation of the micro-scale model into the FE model of the drawing process. Two variants of the drawing process were investigated. The first of them, according to the calculation results, leads to the formation of microcracks. The second considered drawing scheme was selected so that no microcracks appear in the drawn wire. The experimental verification of the calculation results was carried out under laboratory conditions in a tool developed specially for this purpose. Annealing was performed before each pass. The initial wire diameter was 0.1 mm. In the first case it was possible to perform 2-3 passes, after which cracks appeared in the material; in this case cracks along grain boundaries were observed on the wire surface. The second considered drawing scheme allowed 7 passes to be performed without the appearance of cracks; a wire of 0.075 mm diameter was obtained without surface defects and with a ductility allowing further drawing. Thus, the model was validated on two fundamentally different variants of the drawing process.

Received: September 22, 2012 Received in a revised form: October 29, 2012

Accepted: November 9, 2012


245 – 250 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

THREE-DIMENSIONAL ADAPTIVE ALGORITHM FOR CONTINUOUS APPROXIMATIONS OF MATERIAL DATA USING

SPACE PROJECTION

PIOTR GURGUL*, MARCIN SIENIEK, MACIEJ PASZYŃSKI, ŁUKASZ MADEJ

AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Krakow, Poland *Corresponding author: [email protected]

Abstract

The concept of H1 projections for an adaptive generation of a continuous approximation of an input 3D image in the finite element (FE) framework is described and utilized in this paper. Such an approximation, along with a corresponding FE mesh, can be interpreted and used as input data for FE solvers. In order to capture FE solution gradients properly, specific refined meshes have to be created. The projection operator is applied iteratively on a series of increasingly refined meshes, resulting in improving fidelity of the approximation. A developed algorithm for linking image processing to the 3D FEM code is also presented within the paper. In particular, we compare the hp-adaptive algorithm with h-adaptivity, concluding that hp-adaptivity loses its exponential convergence for three-dimensional approximation of non-continuous data. Finally, the numerical results for an exemplary problem and the convergence rates obtained for the described problem are evaluated and discussed.

Key words: adaptive finite element method, space projections, digital material representation

1. INTRODUCTION

Space projections constitute an important tool, which can be used in diverse applications including finite element (FE) analysis (Demkowicz, 2004; Demkowicz & Buffa, 2004; Demkowicz et al., 2006). They might be used, for example, to create an approximation of a generic bitmap in the finite element space. Such bitmaps can represent e.g. the morphology of the digital material representation (DMR) during FE analysis of material behavior under deformation and exploitation conditions (Madej et al., 2011; Madej, 2010; Paszyński et al., 2005). Due to the crystallographic nature of polycrystalline material, particular features are characterized by different properties that significantly influence material deformation. To properly capture FE solution gradients which are the results of the mentioned material

inhomogeneities, specific refined meshes have to be created.

The operator can be applied iteratively on a series of increasingly refined meshes, resulting in improving fidelity of the approximation. A proof of concept for a limited set of applications has been presented in the authors' earlier works (Gurgul et al., 2011; Sieniek et al., 2010; Gurgul et al., 2012).

The continuous data approximation, namely the H1 projection, is necessary in the case of non-continuous input data representing continuous phenomena. Some examples may involve:
- satellite images of the topography of the terrain, where we have non-continuous bitmap data representing rather continuous terrain;
- input data obtained by various techniques representing the temperature distribution over the material, where the temperature is a rather continuous phenomenon.

An exemplary application of the first case may involve flood modeling (Collier et al., 2011); the application of the second case concerns the solution of time-dependent problems with input data representing initial conditions. When we solve a non-stationary problem of the form ∂u/∂t − Δu = f, where u represents temperature, with initial condition u(x,0) = u0, where u0 is represented by non-continuous input data, it is necessary to perform the H1 projection of u0 to get the required regularity of u.

A number of adaptive algorithms for finite element mesh refinements are known. hp-adaptation is one of the most complex and accurate, as it results in exponential convergence with the number of degrees of freedom (Demkowicz et al., 2006). The hp-adaptation process breaks selected finite elements into smaller ones and modifies the polynomial order of approximation locally. The h-adaptive algorithm restricts the mesh refinement process to breaking selected finite elements with a fixed polynomial order of approximation, and it results in algebraic convergence only. In this work we compare hp-adaptivity with h-adaptivity for the H1 projection of non-continuous data. We conclude that for the three-dimensional H1 projection of non-continuous data the hp-adaptivity loses its exponential convergence and thus h-adaptation is enough.

2. PROJECTION OPERATOR

An L2 projection onto the space V may be expressed as the following minimization problem: given an arbitrary function f, find u ∈ V such that ‖f − u‖L2(Ω) is minimal.

Since u = Σi ai ei, where ei are basis functions for V (i.e. V = span(e1, …, en)), we have to determine ai, the coefficients of this linear combination.

Given E = ‖f(x) − u(x)‖²L2(Ω) = ∫Ω (f(x) − u(x))² dΩ, to find the minimum, we differentiate E with respect to the coefficients and set the derivatives to zero in equation (1):

∫Ω (Σi ai ei(x) − f(x)) ej(x) dΩ = 0   (1)

This leads to a linear system (2):

M U = F   (2)

where:

Mij = ∫Ω ei(x) ej(x) dΩ   (3)

Fj = ∫Ω f(x) ej(x) dΩ   (4)

The L2 projection onto the space V constitutes the solution to this system.

However, the method above considers only the function itself for the minimization of the error. We can include information about derivatives and thus minimize not only the error of the function's value, but also of its gradient. This method is called the H1 projection and can be expressed very similarly to the L2 projection.

Given an arbitrary function f, find u ∈ V such that

‖f − u‖H1(Ω) is minimal.   (5)

Thus the expression (6)

∫Ω (f(x) − u(x))² dΩ + α ∫Ω (∇f(x) − ∇u(x))² dΩ   (6)

needs to be minimal.

Since the material data is not continuous in our case, we need to approximate the partial derivatives in the gradient ∇f by finite differences. For a given X = (x1, x2, x3), we find the closest existing (integer) coordinates for x1, x2, x3, produced by the function r(x). Then, we compute the approximation of ∇f, compare equation (7):

∂f/∂x1 ≈ f(r(x1)+1, r(x2), r(x3)) − f(r(x1), r(x2), r(x3));
∂f/∂x2 ≈ f(r(x1), r(x2)+1, r(x3)) − f(r(x1), r(x2), r(x3));
∂f/∂x3 ≈ f(r(x1), r(x2), r(x3)+1) − f(r(x1), r(x2), r(x3))   (7)

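As an illustration of equations (1)-(6), the sketch below assembles and solves the projection system M U = F for a one-dimensional, piecewise-linear finite element basis in NumPy. The function name h1_projection_1d, the quadrature order and the step-function example are our own illustrative choices, not part of the authors' 3D implementation; the gradient of the non-continuous data is approximated by a forward difference in the spirit of equation (7).

import numpy as np

def h1_projection_1d(f, grad_f, nodes, alpha=1.0, quad_pts=4):
    """Project a (possibly non-continuous) function f onto the space of
    piecewise-linear FE basis functions on a 1D mesh by minimizing the
    H1-type functional  E = ∫(f-u)^2 dx + alpha*∫(f'-u')^2 dx.
    Setting alpha=0 recovers the plain L2 projection."""
    n = len(nodes)
    M = np.zeros((n, n))          # Gram matrix: mass + alpha * stiffness terms
    F = np.zeros(n)               # load vector: data + alpha * data-gradient terms
    gauss_x, gauss_w = np.polynomial.legendre.leggauss(quad_pts)
    for e in range(n - 1):        # loop over elements [nodes[e], nodes[e+1]]
        a, b = nodes[e], nodes[e + 1]
        h = b - a
        x = 0.5 * (a + b) + 0.5 * h * gauss_x      # mapped Gauss points
        w = 0.5 * h * gauss_w
        N = np.vstack([(b - x) / h, (x - a) / h])  # local linear shape functions
        dN = np.vstack([-np.ones_like(x) / h, np.ones_like(x) / h])
        fx, dfx = f(x), grad_f(x)
        for i_loc, i in enumerate((e, e + 1)):
            F[i] += np.sum(w * (fx * N[i_loc] + alpha * dfx * dN[i_loc]))
            for j_loc, j in enumerate((e, e + 1)):
                M[i, j] += np.sum(w * (N[i_loc] * N[j_loc]
                                       + alpha * dN[i_loc] * dN[j_loc]))
    return np.linalg.solve(M, F)  # nodal coefficients U of the projection

if __name__ == "__main__":
    # project a step function; its gradient is approximated by a forward difference
    data = np.where(np.linspace(0, 1, 64) > 0.5, 1.0, 0.0)    # "bitmap" samples
    f = lambda x: data[np.clip((x * 63).astype(int), 0, 63)]
    grad_f = lambda x: (f(np.minimum(x + 1 / 63, 1.0)) - f(x)) * 63
    U = h1_projection_1d(f, grad_f, np.linspace(0, 1, 9), alpha=0.01)
    print(U)

Setting alpha to zero reduces the functional to the plain L2 projection, which makes the role of the gradient term in equation (6) easy to inspect.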
3. ADAPTIVE ALGORITHM USED FOR SOLUTION

Quality of the approximation depends on the choice of the space V, where the approximation will be performed. There is no efficient way to determine the precision of a given V a priori, and a workaround here is to refine the space V iteratively, based on the relative error rate in each step. This is done using nested (self-containing) spaces V1 ⊂ … ⊂ VT, where V1 corresponds to the initial mesh and VT is the first mesh for which the desired precision is achieved. Let:
- uV be a solution in the space V,
- Vt^fine be a space corresponding to a mesh where all elements have been refined by one order with respect to Vt,
- Vt^opt be a space corresponding to a mesh where element refinements have been optimally chosen by comparing Vt^fine and Vt,
- Vt^w be any space such that Vt ⊆ Vt^w ⊆ Vt^fine.

Major steps of the described algorithm are presented in figure 1. The algorithm presented in figure 1 is performed in iterations until the stop condition (usually the desired precision) is met. These iterations are presented in figure 2.

1:  ut ← solve the minimization problem in Vt
2:  Vt+1 ← Vt
3:  ut^fine ← solve the minimization problem in Vt^fine
4:  max_rate ← 0
5:  foreach element E in the coarse mesh do
6:      VE^opt ← Vt over E
7:      foreach Vt ⊂ Vt^w ⊆ Vt^fine over element E such that |Vt^w| − |Vt| > 0 do
8:          ut^w ← compute a projection of ut^fine onto Vt^w
9:          rate(E, Vt^w) ← (|ut^fine − ut| − |ut^fine − ut^w|) / (|Vt^w| − |Vt|)
10:         if rate(E, Vt^w) > rate(E, VE^opt) then
11:             VE^opt ← Vt^w
12:         end if
13:     end for
14:     add all basis functions from VE^opt with supports on E to Vt+1
15:     if rate(E, VE^opt) > max_rate then
16:         max_rate ← rate(E, VE^opt)
17:     end if
18: end for
19: return Vt+1, max_rate

Fig. 1. Choice of an optimal mesh for the following iteration of the adaptive algorithm.

1:  err ← ∞
2:  t ← 1
3:  V1 ← initial space corresponding to a trivial mesh
4:  while err > desired_err do
5:      Vt+1, err ← perform listing 1 on Vt
6:      V ← Vt+1
7:      t ← t + 1
8:  end while
9:  return V

Fig. 2. Algorithm approximating the space with the desired precision desired_err.

3.1. hp mesh refinements and their role in projection-based interpolation

The quality of the interpolation can be improved by the expansion of the interpolation basis. In FEM terms, this can be done by means of mesh adaptation. Two methods of adaptation are considered in the present work:

3.1.2. P-adaptation – increasing the polynomial approximation level

One approach is to increase the order of the basis functions on the elements where the error rate is higher than desired. More functions in the basis mean a smoother and more accurate solution, but also more computations and the use of high-order polynomials.


3.1.3. H-adaptation – refining the mesh

Another way is to split an element into smaller ones in order to obtain a finer mesh. This idea arose from the observation that the domain is usually non-uniform: in order to approximate the solution fairly, some places require more precise computations than others, where an acceptable solution can be achieved using a small number of elements. The crucial factor in achieving optimal results is to decide whether a given element should be split into two parts horizontally, into two parts vertically, into four parts (both horizontally and vertically on one side), into eight parts (both horizontally and vertically on both sides) or not split at all. That is why an automated algorithm was developed that decides after each iteration whether an element needs h- or p-refinement or neither. The refinement process is fairly simple in 1D, but the 2D and 3D cases enforce a few refinement rules to follow.

3.1.4. Automated hp-adaptation algorithm

Neither p- nor h-adaptation alone guarantees that the error rate decreases in an exponential manner with the step number. This can be achieved by combining the two methods under some conditions, which are not necessarily satisfied in the present case. Still, in order to locate the most sensitive areas at each stage dynamically and improve the solution as much as possible, a self-adaptive algorithm can be applied. It decides whether a given element should be refined or is already properly refined for a satisfactory interpolation, in a manner analogous to the algorithm for finite element adaptivity described by Demkowicz et al. (2006).

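A minimal sketch of such an error-driven refinement loop is given below, assuming the 1D setting of the earlier projection sketch; for brevity the projection step is replaced by plain nodal interpolation, and the marking rule (refine elements whose error exceeds half of the current maximum) is an illustrative choice rather than the selection strategy of figure 1.

import numpy as np

def element_errors(f, U, nodes, samples=16):
    """L2 error of the piecewise-linear interpolant (nodal values U) against f,
    evaluated element by element with a simple sampling rule."""
    errs = np.empty(len(nodes) - 1)
    for e in range(len(nodes) - 1):
        x = np.linspace(nodes[e], nodes[e + 1], samples)
        u = np.interp(x, nodes, U)
        errs[e] = np.sqrt(np.trapz((f(x) - u) ** 2, x))
    return errs

def h_adapt(f, nodes, tol=1e-2, max_iter=20):
    """Error-driven h-refinement: split the elements with the largest error
    until the total error drops below tol."""
    for _ in range(max_iter):
        U = f(nodes)                        # nodal interpolation stands in for the projection
        errs = element_errors(f, U, nodes)
        if np.sqrt(np.sum(errs ** 2)) < tol:
            break
        marked = errs > 0.5 * errs.max()    # refine the worst elements
        mids = 0.5 * (nodes[:-1] + nodes[1:])[marked]
        nodes = np.sort(np.concatenate([nodes, mids]))
    return nodes

if __name__ == "__main__":
    step = lambda x: np.where(x > 0.5, 1.0, 0.0)   # non-continuous input data
    print(len(h_adapt(step, np.linspace(0, 1, 5))))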
4. NUMERICAL RESULTS

The presented projection algorithm was tested on one three-dimensional example, with the hp-adaptive (see figure 3) and h-adaptive (see figure 4) algorithms. The example concerns the approximation of input data representing a ball-shaped distribution. This may represent the initial distribution of temperature over a ball-shaped material inclusion inside another material. This temperature distribution may constitute the starting point for a non-stationary, time-dependent heat transfer simulation.

The numerical results presented in table 1, obtained by the hp-adaptive solution, show that the algorithm utilized for three-dimensional H1 projections does not deliver exponential convergence, and thus it is reasonable to replace it with its cheaper, h-adaptive counterpart that delivers similar convergence (presented in table 2) with a simpler implementation and shorter execution time.

Table 1. Convergence rate for the problem of H1 projections of 3D balls with hp-adaptivity.

Iteration Mesh size Relative error in H1 norm

1 125 71.3

2 2197 66.3

3 5197 62.4

4 12093 63.9

5 22145 57.7

6 41411 51.03

Fig. 3. 3D balls problem: mesh after the sixth iteration of the hp-adaptive algorithm and solution over the mesh.

Fig. 4. 3D balls problem: mesh after the sixth iteration of the h-adaptive algorithm and solution over the mesh.


Table 2. Convergence rate for the problem of H1 projections for 3D balls with h-adaptivity.

Iteration Mesh size Relative error in H1 norm

1 125 72.1

2 729 68.3

3 4913 68.2

4 11745 62.1

5 32305 52.9

6 68257 49.44
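For the data of tables 1 and 2, the algebraic character of the convergence can be checked with a least-squares fit of err ≈ C·N^(−q) in log-log coordinates. The short script below is only a sanity check on the tabulated values, not part of the original study; N denotes the "mesh size" column.

import numpy as np

# Mesh sizes and relative H1 errors from tables 1 and 2.
hp = {"N": [125, 2197, 5197, 12093, 22145, 41411],
      "err": [71.3, 66.3, 62.4, 63.9, 57.7, 51.03]}
h = {"N": [125, 729, 4913, 11745, 32305, 68257],
     "err": [72.1, 68.3, 68.2, 62.1, 52.9, 49.44]}

def algebraic_rate(N, err):
    """Fit err ~ C * N^(-q) by least squares in log-log coordinates and return q."""
    q, _ = np.polyfit(np.log(N), np.log(err), 1)
    return -q

print("hp-adaptive rate q =", round(algebraic_rate(**hp), 3))
print("h-adaptive  rate q =", round(algebraic_rate(**h), 3))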

5. CONCLUSIONS AND FUTURE WORK

This paper presents a way of incorporating the well-established H1 projection concept into an adaptive algorithm used to prepare material data. The described method allows for the generation of a smooth, continuous interpolation of given arbitrary data, along with an initial pre-adapted mesh suitable for further processing by a non-stationary FE solver.

In this paper we compared three-dimensional hp-adaptivity with h-adaptivity, concluding that the hp-adaptive algorithm does not deliver exponential convergence in the case of approximation of non-continuous data, and it may be reasonable to utilize just the h-adaptation algorithm, with a uniform polynomial order of approximation, which is significantly easier to implement.

It is desirable to experiment with more 3D images as well as with various input parameters (e.g. boundary conditions or image conversion algorithms). Besides, more sophisticated digital material representations should be investigated. The applicability of this methodology for non-stationary finite element method solvers will be tested in our future work. In particular, we plan to test the influence of the quality of the projection of the initial state on the stability of the non-stationary simulation.

Acknowledgements. The work of the first author was partly supported by the European Union by means of the European Social Fund, PO KL Priority IV: Higher Education and Research, Activity 4.1: Improvement and Development of Didactic Potential of the University and Increasing Number of Students of the Faculties Crucial for the National Economy Based on Knowledge, Subactivity 4.1.1: Improvement of the Didactic Potential of the AGH University of Science and Technology "Human Assets", No. UDA-POKL.04.01.01-00-367/08-00. The work of the second author was supported by Polish National Science Center grant no. DEC-2011/03/N/ST6/01397. The work of the third author was supported by Polish National Science Center grant no. NN519447739. The work of the fourth author was supported by grant no. 820/N-Czechy 2010/0.

REFERENCES

Collier, N., Radwan, H., Dalcin, L., 2011, Time Adaptivity in the Diffusive Wave Approximation to the Shallow Water Equations, accepted to Journal of Computational Science.

Demkowicz, L., Kurtz, J., Pardo, D., Paszyński, M., Zdunek, A., 2006, Computing with hp-Adaptive Finite Elements, CRC Press, Taylor and Francis.

Demkowicz, L., Buffa, A., 2004, H1, H(curl) and H(div) – conforming projection-based interpolation in three dimensions, ICES Report 04-24, The University of Texas at Austin.

Demkowicz, L., 2004, Projection-based interpolation, ICES Report 04-03, The University of Texas at Austin.

Gurgul, P., Sieniek, M., Magiera, K., Skotniczny, M., 2011, Application of multi-agent paradigm to hp-adaptive projection-based interpolation operator, accepted to Journal of Computational Science.

Gurgul, P., Sieniek, M., Paszyński, M., Madej, Ł., Collier, N., 2012, Two dimensional hp-adaptive algorithm for continuous approximations of material data using space projections, accepted to Computer Science.

Madej, L., 2010, Development of the modeling strategy for the strain localization simulation based on the Digital Material Representation, DSc dissertation, AGH University Press, Krakow.

Madej, L., Rauch, L., Perzyński, K., Cybułka, P., 2011, Digital Material Representation as an efficient tool for strain inhomogeneities analysis at the micro scale level, Archives of Civil and Mechanical Engineering, 11, 661-679.

Paszyński, M., Romkes, A., Collister, E., Meiring, J., Demkowicz, L., Wilson, C.G., 2005, On the modeling of Step-and-Flash imprint lithography using molecular static models, ICES Report 05-38.

Perzyński, K., Major, Ł., Madej, Ł., Pietrzyk, M., 2011, Analysis of the stress concentration in the nanomultilayer coatings based on digital representation of the structure, Archives of Metallurgy and Materials, 56, 393-399.

Sieniek, M., Gurgul, P., Kołodziejczyk, P., Paszyński, M., 2010, Agent-based parallel system for numerical computations, Procedia Computer Science, 1, 1971-1981.

THREE-DIMENSIONAL ADAPTIVE ALGORITHM FOR CONTINUOUS APPROXIMATIONS OF MATERIAL DATA USING SPACE PROJECTIONS

Summary

The aim of this paper is to describe and demonstrate the practical use of the concept of the H1 projection for the adaptive generation of a continuous approximation of an input 3D image in a finite element basis. Such an approximation, together with the corresponding mesh, can be interpreted as a continuous representation of the input data for finite element method (FEM) solvers. The paper presents the theoretical basis of the projection mechanism together with a comparison of the hp-adaptive and h-adaptive algorithms used for the iterative generation of successive approximations. The way of estimating and reducing the approximation error is also discussed. A computational example illustrating the operation of the described methods is presented.

Received: September 26, 2012 Received in a revised form: October 23, 2012

Accepted: October 26, 2012


251 – 257 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

PARALLEL IDENTIFICATION OF VOIDS IN A MICROSTRUCTURE USING THE BOUNDARY ELEMENT

METHOD AND THE BIOINSPIRED ALGORITHM

WACŁAW KUŚ*, RADOSŁAW GÓRSKI

Silesian University of Technology, Institute of Computational Mechanics and Engineering, Konarskiego 18A, 44-100 Gliwice, Poland

*Corresponding author: [email protected]

Abstract

The problem of identification of the size of a void in a microscale on the basis of the homogenized material parameters is studied in this work. A three-dimensional unit-cell model of a porous microstructure is modelled and analyzed by the boundary element method (BEM). The method is very accurate and for the considered problem requires discretization of only the outer boundary of the models. The algorithm used for identification is characterized by a hierarchical structure which allows for parallel computing on three different levels. The parallel algorithm is used for evolutionary computations. The solution of boundary value problems by the BEM and the determination of effective material properties by the numerical homogenization method are also parallelized. The computation of the compliance matrix for a porous microstructure is shown. The matrix is used to formulate the objective function in the identification problem in which the size of a void is sought. The scalability tests of the algorithm are performed using a server consisting of eight floating point units. As a result of using the hierarchical structure of the identification algorithm and the BEM, a significant computation speedup and high accuracy are achieved.

Key words: parallel computing, bioinspired algorithms, identification, boundary element method, micromechanics, numerical homogenization

1. INTRODUCTION

The bioinspired algorithms are very efficient optimization tools for single and multimodal objective functional problems (Michalewicz, 1996). The main drawback of these algorithms is a large number (hundreds or thousands) of objective function evaluations. The time needed for an evaluation of a single objective function depends on a boundary value problem usually solved by numerical methods, like the finite element method (FEM) or the boundary element method (BEM). The overall wall time of identification can be shortened when parallel algorithms are used (Kuś & Burczyński, 2008).

Apart from analytical models and experimental testing, numerical simulations play today an important role in the prediction of the behaviour of new materials with a complex structure. A recent increase in computational power gives the possibility of studying different materials using a numerical homogenization approach. Since the direct modelling and analysis of most engineering structures made of heterogeneous materials is computationally very demanding, numerical homogenization methods can be applied instead. By using this technique, a complex microstructure may be represented for instance by means of a representative volume element (RVE) or a unit cell and can be modelled and analyzed on two or more different scales.

Because the main emphasis in this work is put on parallel bioinspired computations assisting numerical homogenization, analytical homogenization procedures are out of the scope of this paper and will not be discussed. Among the numerical homogenization methods, the FEM and the BEM are the most frequently used. The studies in the literature concern the homogenization of different materials with complex microstructures, for example composite materials, cellular materials, heterogeneous tissue scaffolds and others. Fang et al. (2005), for instance, have studied homogenization of porous tissue scaffolds by the FEM and by two other approaches. They computed the effective mechanical properties of different scaffold materials and pore shapes. Düster et al. (2012) have shown a new approach for the numerical homogenization of heterogeneous and cellular materials using the finite cell method. The important feature of the method is the possibility of discretizing complicated microstructures in a fully automatic way. Chen and Liu (2005) have analysed composites reinforced by spherical particles or short fibres by the advanced BEM. Difficulties in dealing with nearly-singular integrals during modelling of composites with closely packed fillers have been resolved by new and improved techniques. Araújo et al. (2010) have modelled and analysed three-dimensional composite microstructures by the BEM and a parallel algorithm. Multiscale analysis by coupling molecular statics and the BEM is presented by Burczyński et al. (2010a). Optimization of macro models analyzed by the FEM using parallel algorithms is shown by Kuś and Burczyński (2008). Identification of material parameters of a bone by using multiscale modelling and a distributed parallel evolutionary algorithm is presented by Burczyński et al. (2010b).

In the present paper the identification of voids in microstructures modelled by the BEM is presented. In order to speed up the computations, the identification problem is solved by a parallel hierarchical algorithm. In order to solve the problem, the developed system is built of three programs, i.e. an evolutionary algorithm (an optimization tool), a computational homogenization module (evaluation of the objective function) and the BEM program (a boundary value problem solver). Three-dimensional unit-cell models (which play the role of an RVE) of microstructures with voids are considered. The numerical homogenization of an orthotropic material is shown, for which the homogenized properties are determined. The properties are used to formulate the objective function depending on the quantities of a macro and micro model in order to identify the size of a void in the unit cell model of the material. As a result, the parameters defining voids in the material on a microscale are determined on the basis of orthotropic parameters in a macroscale.

2. COMPUTATIONAL HOMOGENIZATION BASED ON THE BEM

In this section, the idea of numerical homogenization is described within the framework of a linear elastic material characterized by a periodic microstructure containing voids. It is assumed that the material is macroscopically orthotropic and that a macro model of a structure made of this material is subjected to small deformations. The porous microstructure is modelled and analyzed by the BEM. Boundary integral equations for a general three-dimensional (3D) isotropic body are shown. The stress-strain relationships for an orthotropic material are presented. As a result of the numerical homogenization by the BEM, the macroscopic homogenized properties of the material are determined on the basis of analysis of the unit cell models in a micro scale.

First, consider a 3D body (a macro model) made of a homogeneous, isotropic and linear elastic material. The external boundary of the body is denoted by Γ. The body is statically loaded along the boundary by boundary tractions tj; the displacements of the body are denoted by uj.

Assuming that body forces do not act on the body, the relation between the loading of the body and its displacements can be expressed by the boundary integral equation (known as the Somigliana identity) in the following form (Gao & Davies, 2002):

cij(x′) uj(x′) + ∫Γ Tij(x′, x) uj(x) dΓ(x) = ∫Γ Uij(x′, x) tj(x) dΓ(x)   (1)

where x′ is a collocation point, for which the above integral equation is applied, x is a point on the external boundary Γ, the constant cij depends on the position of the point x′, and Uij and Tij are fundamental solutions of elastostatics. The summation convention is used in the equation (the indices for a 3D problem are i, j = 1, 2, 3).


Numerical BEM equations are obtained after discretization of the boundary integral equation (1), which is successively applied for all collocation points. In the developed computer program the outer boundary of the body is divided into 8-node quadratic boundary elements. Along the external boundary the variations of coordinates, displacements and tractions are interpolated using quadratic shape functions. The resulting BEM equations can be expressed in the following matrix form:

H u = G t   (2)

where u and t are displacement and traction vectors, respectively, H and G are coefficient matrices dependent on the boundary integrals of fundamental solutions and shape functions, and their elements are integrated numerically using Gauss quadratures.

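A sketch of how the discretized system (2) is typically rearranged into a solvable set of equations is given below; it assumes one scalar degree of freedom per entry and NumPy dense algebra, whereas the actual program works with 3D vector DOFs and its own parallel solver.

import numpy as np

def solve_bem_system(H, G, bc_type, bc_value):
    """Rearrange H u = G t into A x = b: for each boundary DOF either the
    displacement ('u') or the traction ('t') is prescribed, and the other
    one is unknown. Returns the full displacement and traction vectors."""
    n = len(bc_value)
    A = np.empty((n, n))
    b = np.zeros(n)
    for k in range(n):
        if bc_type[k] == 'u':      # u_k prescribed -> unknown t_k keeps the -G column
            A[:, k] = -G[:, k]
            b -= H[:, k] * bc_value[k]
        else:                      # t_k prescribed -> unknown u_k keeps the H column
            A[:, k] = H[:, k]
            b += G[:, k] * bc_value[k]
    x = np.linalg.solve(A, b)
    is_u_known = np.array(bc_type) == 'u'
    u = np.where(is_u_known, bc_value, x)
    t = np.where(is_u_known, x, bc_value)
    return u, t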
Consider now a heterogeneous material with a periodic microstructure with voids in the form of rectangular prisms. A unit cell of this material (a micro model) representing its porous microstructure contains a single rectangular prism of arbitrary side lengths (i.e. a1, a2, a3), as shown in figure 1. Because there are three mutually perpendicular symmetry planes with respect to the void, aligned along the x1, x2 and x3 axes, the material in a macro scale is referred to as orthotropic.

Fig. 1. A unit cell model of an orthotropic material

If a material has a non-regular and non-uniform microstructure, representative volume elements (RVE) representing the microstructure of this material should rather be used than unit cell models. More comprehensive definitions of the RVE can be found elsewhere, for instance in Kouznetsova (2002). In the RVE or unit cell analysis, representative sections (volumes) of a material are analyzed in order to calculate the homogenized properties. The coupling of the macro and micro levels is based on the averaging theorems. Thus, the relation between strains and stresses is formulated in an average sense in order to determine the homogenized (effective) macroscopic properties.

The mechanical properties of a linear elastic material are characterized by the compliance matrix S or by the stiffness matrix C. The means for obtaining the elements of the compliance matrix S for an orthotropic material by using the numerical homogenization concept and the BEM are presented below.

Using the engineering notation, the strain-stress relationships for an orthotropic material have the following form (Kollár & Springer, 2003):

[ε1]     [S11  S12  S13   0    0    0 ]  [σ1]
[ε2]     [S12  S22  S23   0    0    0 ]  [σ2]
[ε3]  =  [S13  S23  S33   0    0    0 ]  [σ3]       (3)
[γ23]    [ 0    0    0   S44   0    0 ]  [τ23]
[γ13]    [ 0    0    0    0   S55   0 ]  [τ13]
[γ12]    [ 0    0    0    0    0   S66]  [τ12]

where the compliance matrix S has 12 nonzero elements, but only 9 are independent, ε1, ε2, ε3 and γ23, γ13, γ12 are engineering strains, and σ1, σ2, σ3 and τ23, τ13, τ12 are engineering stresses. The walls of the unit cell in figure 1 align with the x1, x2 and x3 axes, in which the strains and stresses in equation (3) are defined. For the considered orthotropic material in a macro scale, the compliance matrix is specified in the coordinate system defined by these axes, which are perpendicular to the three symmetry planes. The compliance matrix S is symmetrical for an elastic material (Sij = Sji) and it is the inverse of the stiffness matrix (S = C−1).

In order to compute the elements of the compliance matrix, 6 numerical tests are performed using the unit cell in figure 1, i.e. 3 tensile tests and 3 shear tests. In this work, homogeneous static boundary conditions are applied. Unit tractions are prescribed to the unit cell models. For instance, for the tensile test in the x1 direction only the traction in this direction (the σ1 stress) is prescribed and the remaining ones are zero. When the first stress state is applied, the resulting strains are obtained from the strain-stress relationships for an orthotropic material and the first column of the compliance matrix in equation (3) is determined. Repeating the analysis five more times for the remaining unit stress vectors allows determining all columns of the compliance matrix. In order to determine the homogenized macroscopic properties represented by this matrix, the relation between strains and stresses is formulated in an average sense. The average strains are computed on the basis of displacements obtained from the BEM analysis by their integration over the boundaries of the models. These strains provide the relevant terms in the compliance matrix S and thus the effective properties.

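Under the unit-traction loading described above, each averaged strain vector is directly one column of S, so the assembly step can be sketched as follows; run_bem_unit_test is a hypothetical stand-in for the BEM analyses and the final symmetrization is an illustrative post-processing choice.

import numpy as np

def compliance_from_unit_tests(average_strains):
    """Assemble the 6x6 compliance matrix S from six unit-traction tests.
    average_strains[k] is the averaged engineering strain vector
    [eps1, eps2, eps3, gam23, gam13, gam12] obtained from the BEM analysis
    of the unit cell loaded with the k-th unit stress vector (sigma_k = 1, rest 0).
    With unit stresses, each strain vector is directly the k-th column of S."""
    S = np.column_stack(average_strains)
    return 0.5 * (S + S.T)          # symmetrize, since S_ij = S_ji for an elastic material

# hypothetical usage with an external homogenization routine:
# strains = [run_bem_unit_test(k) for k in range(6)]
# S = compliance_from_unit_tests(strains)
# E1 = 1.0 / S[0, 0]               # e.g. effective Young's modulus in the x1 direction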
3. FORMULATION OF IDENTIFICATION PROBLEM

The identification problem consists in finding the side lengths a1, a2, a3 of the void in the unit cell model in figure 1 by minimizing the following functional F dependent on the elements of the compliance matrix:

F = Σ_{i=1}^{n} |si − ŝi|   (4)

where si are the computed homogenized material properties, i.e. the elements of the compliance matrix si = S11, S22, S33, S12, S13, S23, S44, S55, S66, ŝi are the reference homogenized material properties related to a macromodel (e.g. from experiments), and n is the number of independent material parameters for an orthotropic material (n = 9 in this case).

The identification problem is solved by the evolutionary algorithm, in which a population of chromosomes is processed in each iteration. The design variables (the side lengths a1, a2, a3) are coded in the genes of each chromosome, which is a potential solution of the problem. At the beginning, the initial population of chromosomes is generated randomly. Then the values of the objective function (fitness function) for all chromosomes are calculated. The fitness function defined by equation (4) is obtained by solving six boundary value problems with the use of the BEM and the homogenization procedure. In the next step, the randomly chosen chromosomes and their genes are modified by using evolutionary operators. The new generation is created on the basis of the offspring population during the selection process. The loop of the algorithm is repeated until the end condition is fulfilled (expressed e.g. as a maximum number of iterations).

In order to improve the evolutionary process of the algorithm and speed up the computations, the island (also called distributed) version of the evolutionary algorithm is proposed in the present work. It uses a few subpopulations of chromosomes which evolve separately. Chromosomes can be exchanged between subpopulations during a migration phase. Another improvement concerns the evaluation of the fitness function. The developed algorithm uses a database containing information about evaluated chromosomes and their fitness function values. It prevents the evaluation of the fitness function for chromosomes which have the same genes; in this case, the value from the database is used. The procedure saves much time because solving a boundary value problem is usually the most time-consuming operation during the evolutionary process.

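A possible form of such a cached fitness function, with the sum of absolute differences from equation (4) as the objective, is sketched below; CachedFitness and run_homogenization are illustrative names, not the components of the authors' system.

import numpy as np

class CachedFitness:
    """Objective function from equation (4) with a result database:
    identical chromosomes (void side lengths a1, a2, a3) are evaluated only once."""
    def __init__(self, reference_s, homogenize):
        self.reference_s = np.asarray(reference_s)   # reference compliance terms ŝ_i
        self.homogenize = homogenize                 # callable: (a1, a2, a3) -> 9 compliance terms
        self.database = {}

    def __call__(self, chromosome):
        key = tuple(np.round(chromosome, 6))         # genes a1, a2, a3
        if key not in self.database:
            s = np.asarray(self.homogenize(*key))    # six BEM analyses + averaging
            self.database[key] = float(np.sum(np.abs(s - self.reference_s)))
        return self.database[key]

# hypothetical usage with an external homogenization routine and the table 1 values as reference:
# fitness = CachedFitness(reference_s=[1.076, -0.318, -0.319, 1.050, -0.317, 1.057, 2.729, 2.780, 2.763],
#                         homogenize=run_homogenization)
# value = fitness([0.2, 0.4, 0.3])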
Identification problems belong to a class of ill-defined problems and the uniqueness of the solution is not guaranteed. The same value of the objective function may be obtained for a different number of voids and for other void parameters.

4. PARALLELIZATION OF IDENTIFICATION ALGORITHM

The aim of the parallelization of the identification algorithm is to obtain the results as fast as possible. Two factors are taken into account in the parallelization strategy: the wall time of computations and the memory consumption. The memory usage of the algorithm is important because the methods used in the paper increase the memory requirements. The physical memory installed in a server should be taken into account during parallelization of the algorithm to prevent swapping memory to disk, which may lead to a much longer wall time of computations. In the considered identification process, solving a boundary value problem is the most time-consuming task. The parallelization of the identification algorithm can be performed on at least three levels, as shown in figure 2. On each level a different program of the developed system consisting of three modules is applied, i.e. on the first level the evolutionary algorithm, on the second the computational homogenization procedure and on the third the BEM program for the solution of a boundary value problem (parallel system of equations solver - PSS). The parameter nLx is the number of threads used by a program on level x. The parallelization is hierarchical and the total number of parallel threads is equal to the product of the parameters nLx for all three levels.

The parallelization of evolutionary algorithms is quite easy (Kuś & Burczyński, 2008). The efficiency of using a parallel algorithm is high especially for problems for which the evaluation of a fitness function is long (from seconds to hours or in some cases days). The maximum number of parallel threads nL1 is equal to the total number of chromosomes. Parallel evolutionary algorithms that use the floating point representation operate on small populations of chromosomes, for example 10, 20 or 50. The parallelization increases the memory requirements for computations. The memory amount is a sum of the memory requirements for the evaluation of a fitness function for each chromosome. The memory consumption is nL1 times larger than in the case of a sequential algorithm. The parallelization on level 2 is related to the parallel computational homogenization. The homogenization procedure consists of boundary value problems solved in a parallel way and a sequential algorithm which computes the homogenized material properties. The maximum number of parallel tasks is 6 for a 3D problem. The homogenization procedure runs 6 BEM analyses, therefore the number of parallel tasks should be 1, 2, 3 or 6 in order to use all cores equally. The boundary value problem is solved with the use of the BEM on the third level of parallelization. Several steps of the BEM algorithm can be parallelized. The most important is the parallelization of solving the system of equations. In the BEM, full matrices are created, thus standard routines like those in LAPACK can be used in order to solve the system of algebraic equations. The parallel approach is realized with the use of the Intel MKL library.

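The sketch below illustrates the first two levels of this hierarchy with a standard Python process pool: all (chromosome, load case) pairs of one generation are independent, so up to nL1·nL2 of them can run concurrently, while the third level is hidden inside the equation solver. The functions are placeholders under assumed interfaces, not the actual three-program system.

from concurrent.futures import ProcessPoolExecutor
from itertools import product

def bem_unit_test(args):
    """Stand-in for one BEM boundary value problem: the chromosome defines the void
    (a1, a2, a3) and load_case selects one of the 6 unit stress states; the real
    solver is the external BEM program, whose equation solver may itself use nL3
    threads (e.g. a threaded LAPACK/MKL back-end)."""
    chromosome, load_case = args
    return [0.0] * 6               # dummy averaged strain vector

def evaluate_generation(population, nL1=18, nL2=6):
    """Levels 1 and 2 of the hierarchy: distribute all (chromosome, load case)
    pairs of one generation over a pool of nL1*nL2 worker processes."""
    tasks = list(product(population, range(6)))
    with ProcessPoolExecutor(max_workers=nL1 * nL2) as pool:
        results = list(pool.map(bem_unit_test, tasks))
    # regroup the 6 unit-test results per chromosome for the homogenization step
    return [results[6 * i: 6 * (i + 1)] for i in range(len(population))]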
Table 1. Homogenized properties of the reference material

Material parameter:  S11     S12     S13     S22     S23     S33     S44     S55     S66
Value [GPa-1]:       1.076  -0.318  -0.319   1.050  -0.317   1.057   2.729   2.780   2.763

5. NUMERICAL TESTS

The geometry of the considered RVE is presented in figure 1. The unit cell size is 1×1×1 mm. The constraints on the design variables, i.e. the side lengths a1, a2, a3, are imposed and each is within the range of 0.05 to 0.85 mm. For each test the prescribed traction on a wall of the unit cell is p = 1 MPa. The linear elastic material properties of the microstructure are as follows: Young's modulus E = 1.0 GPa and Poisson's ratio ν = 0.3. Each outer wall of the unit cell and inner wall of the void is divided into 16 quadratic boundary elements, resulting in 192 elements for the whole model. The orthotropic properties of the reference material are shown in table 1. The elements of the reference compliance matrix were obtained for an actual void with the side lengths shown in table 2.

The parameters of the evolutionary algorithm are as follows: the number of genes is 3, the number of chromosomes is 20, the number of iterations is 50, the probability of simple crossover with Gaussian mutation is 90%, and the probability of uniform mutation is 10%. The results of identification and the error with respect to the actual void are shown in table 2. The corresponding value of the objective function is F = 0.027 GPa-1.

Table 2. Actual and found void side lengths

Void parameter Actual Found Error %

a1 0.200 0.217 8.5

a2 0.400 0.346 13.5

a3 0.300 0.319 6.3

The tests were performed with the use of a Dell PowerEdge R515 server. The server contains two AMD Opteron 6272 processors, each with 16 cores (8 floating point units). In all tests, the parameters of the evolutionary algorithm are the same as in the previous example. The maximum value of the nL1 parameter is 18 (which corresponds to the maximal number of chromosomes evaluated in each iteration of the evolutionary algorithm). The times of identification for all tests are presented in table 3. The speedup is computed with reference to the results of test 1. The parallelization on level 3 (tests 1-6) is not efficient due to only partial parallelization of the BEM algorithm. The parallel evolutionary algorithm and the homogenization procedure are characterized by a similar efficiency (tests 7-16). The maximum number of parallel evolutionary algorithm threads is 18 and the homogenization allows 6 parallel threads, so the total number of cores which may be used in the computations is equal to 108. In the future the authors plan to use a cluster with a larger number of cores to check the scalability of the presented approach.

Fig. 2. A hierarchical parallel structure of the identification algorithm

Table 3. Times of identification for different numbers of tasks

Test  nL1  nL2  nL3  Number of threads  Time [s]  Speedup

1 1 1 1 1 21 220 1

2 1 1 2 2 18 136 1.17

3 1 1 4 4 17 983 1.18

4 1 1 8 8 17 112 1.24

5 1 1 16 16 16 449 1.29

6 1 2 1 2 11 937 1.78

7 2 1 1 2 12 039 1.76

8 2 2 1 4 6 861 3.09

9 1 6 1 6 5 975 3.55

10 6 1 1 6 5 802 3.66

11 2 6 1 12 3 426 6.19

12 16 1 1 16 3 213 6.60

13 8 2 1 16 2 806 7.56

15 3 6 1 18 2 549 8.32

16 6 3 1 18 2 539 8.36

6. CONCLUSIONS

The identification of the size of a pore in a micro scale model on the basis of parameters in a macro scale is considered in this work. A three-dimensional unit-cell model of a porous material is modelled and analyzed by the boundary element method (BEM). The main advantage of using the BEM in the analysis is its high accuracy and the fact that it requires discretization of only the outer boundary of the considered models. These advantages are valuable and can be exploited in more complex problems dealing for instance with numerical homogenization and optimization or identification. In order to solve the problem, a hierarchical parallelization of the algorithms was developed. In the numerical examples, the parameters defining the geometry of a void were successfully identified. The results of numerical tests with wall time measurements for different numbers of cores are shown. The time of computations is reduced from about 6 hours, when one core is used, to about 40 minutes with the parallel approach.

Acknowledgements. The scientific research has been financed by the Ministry of Science and Higher Education of Poland in the years 2010-2012.

REFERENCES

Araújo, F.C., d'Azevedo, E.F., Gray, L.J., 2010, Boundary-element parallel-computing algorithm for the microstructural analysis of general composites, Comput. Struct., 88, 773-784.

Burczyński, T., Mrozek, A., Górski, R., Kuś, W., 2010a, Molecular statics coupled with the subregion boundary element method in multiscale analysis, Int. J. Multiscale Comput. Eng., 8, 319-331.

Burczyński, T., Kuś, W., Brodacka, A., 2010b, Multiscale modeling of osseous tissues, J. Theor. Appl. Mech., 48, 855-870.

Chen, X.L., Liu, Y.J., 2005, An advanced 3D boundary element method for characterizations of composite materials, Eng. Anal. Bound. Elem., 29, 513-523.

Düster, A., Sehlhorst, H.G., Rank, E., 2012, Numerical homogenization of heterogeneous and cellular materials utilizing the finite cell method, Comput. Mech., 50, 413-431.

Fang, Z., Yan, C., Sun, W., Shokoufandeh, A., Regli, W., 2005, Homogenization of heterogeneous tissue scaffold: A comparison of mechanics, asymptotic homogenization, and finite element approach, ABBI, 2, 17-29.

Gao, X.W., Davies, T.G., 2002, Boundary element programming in mechanics, Cambridge University Press, Cambridge.

Kollár, L.P., Springer, G.S., 2003, Mechanics of composite structures, Cambridge University Press, Cambridge, New York.

Kouznetsova, V., 2002, Computational homogenization for the multi-scale analysis of multi-phase materials, PhD thesis, Technische Universiteit Eindhoven, Eindhoven.

Kuś, W., Burczyński, T., 2008, Parallel bioinspired algorithms in optimization of structures, eds, Wyrzykowski, R., Dongarra, J., Karczewski, K., Wasniewski, J., PPAM 2007, LNCS, 4967, 1285-1292.

Michalewicz, Z., 1996, Genetic algorithms + data structures = evolution programs, Springer-Verlag, Berlin.


PARALLEL IDENTIFICATION OF VOIDS IN A MICROSTRUCTURE USING THE BOUNDARY ELEMENT METHOD AND THE BIOINSPIRED ALGORITHM

Summary

The paper presents the problem of identification of the size of a void in the micro scale on the basis of homogenized material parameters. A three-dimensional unit cell model of a porous microstructure is modelled and analyzed by the boundary element method (BEM). The method is very accurate and for the considered problem requires discretization of only the external boundary of the models. The algorithm used for the identification has a hierarchical structure which allows computations to be carried out in parallel on three different levels. A parallel algorithm was used for the evolutionary computations. The solution of boundary value problems by the BEM and the determination of effective material properties by numerical homogenization were also parallelized. The way of determining the compliance matrix of the porous microstructure is shown. The matrix is used to formulate the objective function in the identification problem, in which the size of a void is sought. Scalability tests of the algorithm were performed using a server containing eight floating point units. As a result of applying the hierarchically structured algorithm and the BEM, a significant speedup and high accuracy of the computations were obtained.

Received: October 11, 2012 Received in a revised form: October 22, 2012

Accepted: October 29, 2012


258 – 263 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

APPLICATION OF THE THREE DIMENSIONAL DIGITAL MATERIAL REPRESENTATION APPROACH TO MODEL

MICROSTRUCTURE INHOMOGENEITY DURING PROCESSES INVOLVING STRAIN PATH CHANGES

KRZYSZTOF MUSZKA*, ŁUKASZ MADEJ

AGH University of Science and Technology, Faculty of Metals Engineering and Industrial Computer Science, Mickiewicza 30, 30-059 Kraków, Poland

*Corresponding author: [email protected]

Abstract

The present paper discusses possibilities of application of the 3D Digital Material Representation (DMR) approach in the light of the multiscale modelling of materials subjected to complex strain paths. In some metal forming processes, the material undergoes a complex loading history that introduces significant inhomogeneity of the strain. High strain gradients, in turn, lead to high inhomogeneity of the microstructure and make the prediction of the final material's properties especially complicated. Proper control of those parameters is very difficult and can be effectively optimised only if numerical tools are involved. The 3D Digital Material Representation approach is presented and introduced in the present paper into a multiscale finite element model of two metal forming processes characterised by high microstructural gradients: cyclic torsion deformation and Accumulative Angular Drawing (AAD). Due to the combination of the multiscale finite element model with the DMR approach, detailed information on strain inhomogeneities was obtained in both investigated processes.

Key words: 3D digital material representation, multiscale modelling, strain path changes

1. INTRODUCTION

During manufacturing processes, metal may be subjected to complex strain path changes that introduce a high level of both deformation and microstructural inhomogeneity and make the prediction of material behaviour extremely difficult. Existing numerical tools are powerful and offer various possibilities, however there are still limitations in the modelling of processes that are characterised by non-linear and non-symmetrical deformation modes. The problem of the influence of strain path changes on the microstructure evolution and mechanical behaviour has been widely studied, both theoretically and experimentally (Davenport et al., 1999; Jorge-Badiola & Gutierrez, 2004). It has been found that this processing parameter significantly retards recrystallization, precipitation and phase transformation kinetics during hot deformation of steels. Strain path changes applied during cold deformation also play an important role in the control of strain and microstructure inhomogeneity. High local strain accumulation leads to significant grain refinement and significantly improves the strength of the material, but in some cases (severe plastic deformation methods) decreases the ductility of the material. Understanding of the strain path in the light of the aforementioned problems is therefore of paramount importance.

Computer modelling needs to be involved in order to learn how to control the microstructure and deformation inhomogeneity during complex loading processes. Due to the nonlinearity and lack of symmetry, simulation of deformation involving complex strain path changes requires 3D models to be created. As most of the microstructural phenomena during deformation take place at various scales, a multiscale modelling approach should also be considered. Proper representation of the microstructural features can be effectively achieved with the use of the recently developed Digital Material Representation (DMR) technique (Madej et al., 2011), where the microstructure is explicitly represented by a properly divided heterogeneous finite element mesh.

In the present paper, the 3D Digital Material Representation approach is presented and incorporated into a multiscale finite element model of two metal forming processes characterised by high microstructural gradients. The first case study involves cyclic torsion deformation of an FCC structure, whereas in the second case study the Accumulative Angular Drawing (AAD) process of a BCC structure is modelled.

2. EXPERIMENTAL INVESTIGATION

2.1. Forward/reverse torsion test

The effect of strain path reversal on austenite was studied in torsion using a model alloy system with a chemical composition of 0.092C-30.3Ni-1.67Mn-1.51Mo-0.19Si (in wt. %). Since in Fe-30wt%Ni systems the austenite phase is stable down to room temperature and these alloys are characterised by a stacking fault energy and high-temperature flow behaviour similar to low carbon steels, they are widely used to model the austenite phase of those materials. The initial microstructure of the studied material, represented by an EBSD map, is shown in figure 1. Solid bar torsion specimens with a gauge length of 20 mm and a gauge diameter of 10 mm were machined out of the solution treated plate. The torsion test was carried out using a servo-hydraulic torsion rig at 840°C with a strain rate of 1/s. Two deformation routes were applied, both with the same equivalent total strain of 2. In the first case, 4 cycles of forward/reverse deformation with a strain of 0.25 per pass (8 passes in total) were applied. In the second case, 2 passes of deformation with a strain of 1 per pass and only one reversal were applied.

The deformed microstructures observed using optical microscopy are shown in figures 1b and 1c. It can be seen that in both cases the original shape of the austenite grains has been restored. In the case of the 2-pass deformation, however, the initial austenite microstructure has been subdivided into well-developed lamellar structures separated by high angle grain boundaries (Sun et al., 2011). The recorded flow curves are summarized in figure 2. In both cases, the stress level upon reversal is lower, which suggests the occurrence of the so-called Bauschinger effect: due to rearrangement of the substructure upon reversal, the dislocation density in the reversed structure is lower. Additionally, based on the flow curves it can be seen that this effect has been multiplied in the 8-pass test.

Fig. 2. Flow curves recorded during cyclic torsion deformation of Fe-30wt.%Ni.

The present study confirmed that the strain path effect represents one of the most important processing parameters characterising hot metal forming processes. The various austenite states resulting from different strain paths in steel are crucial since they affect the subsequent phase transformations and thus their products, which in turn has an effect on the properties of the final materials. Computer modelling of such problems can give some new insight into the understanding and optimisation of processes carried out with strain path changes.

Fig. 1. Electron Backscatter Diffraction (EBSD) map of the initial austenite microstructure (a); optical microstructures of the deformed samples using deformation routes 1 and 2 (b, c respectively).

2.2. Accumulative Angular Drawing (AAD) process

In order to carry out the study of the AAD process, a special die was designed such that an ordinary drawing bench could be used (Wielgus et al., 2010). Microalloyed steel (0.07C/1.37Mn/0.27Si/0.07Nb/0.009N) supplied as a wire rod, with a homogeneous equiaxed ferrite microstructure and a mean grain size of 15 μm, was used in this study. The 6.5 mm diameter wire rods were drawn down to a diameter of 4 mm through a set of three dies (in two passes of drawing) with a total strain of 0.97. Although the AAD design allows various combinations of die positioning to be used, the present study concentrated on the stepped die positioning, in which the offset from the drawing line between the successive dies was equal to 15°.

Optical and electron microscopy observations have shown a high level of microstructure inhomogeneity, i.e. substantial grain refinement was achieved in the transverse section of the wires, in the areas near the surface. Grains were also elongated along the wire axis. The dependence of grain shape, size and distribution in the transverse cross section on the processing route is clearly seen in figure 3. The refinement of the microstructure is localised in the near-surface layers, however, with various intensities.

The presented work confirmed that the strain path applied in the AAD process directly affects the microstructure and texture changes in the final product. It is a combined effect of: reduction of the area, strain accumulation in the outer part of the wire due to the bending/unbending process, and the desired shear deformation. Again, numerical modelling can be a valuable support to the experimental research on these effects.

3. NUMERICAL INVESTIGATION

The main aim of the present work was to study whether a combination of multiscale finite element modelling with the 3D DMR approach can be used to effectively model the complex deformation processes that were described in the previous chapter. Calculations were performed using the Abaqus Standard/Explicit package. In both cases, the material behaviour was described using an elasto-plastic model with combined isotropic-kinematic hardening (Lemaitre & Chaboche, 1990). The evolution law of this model consists of two main components: a nonlinear kinematic hardening component, which describes the translation of the yield surface in stress space through the backstress α:

$$\dot{\boldsymbol{\alpha}}_k = C_k \frac{1}{\sigma^0}\left(\boldsymbol{\sigma} - \boldsymbol{\alpha}\right)\dot{\bar{\varepsilon}}^{pl} - \gamma_k\,\boldsymbol{\alpha}_k\,\dot{\bar{\varepsilon}}^{pl}, \qquad \boldsymbol{\alpha} = \sum_{k=1}^{N}\boldsymbol{\alpha}_k \qquad (1)$$

where αk is the backstress, N is the number of backstresses, σ0 is the equivalent stress defining the size of the yield surface, and Ck and γk are material parameters; and an isotropic hardening component describing the change of the equivalent stress defining the size of the yield surface as a function of plastic deformation:

$$\sigma^0 = \sigma|_0 + Q_\infty\left(1 - e^{-b\,\bar{\varepsilon}^{pl}}\right) \qquad (2)$$

where σ|0 is the yield stress at zero plastic strain, and Q∞ and b are material parameters. The model is


Fig. 3. Initial microstructures of the studied material taken in the longitudinal – a) and transverse – b) cross-section. Euler angle maps of the deformed wires taken near the surface – c) and in the centre – d) of the longitudinal cross section of the deformed wire.


based on two major model parameters: Ck (the initial kinematic hardening moduli) and γk (the rate at which the kinematic hardening moduli decrease with increasing plastic deformation). These parameters can be specified directly, calibrated from half-cycle test data (unidirectional tension or compression), or obtained from test data for a stabilized cycle (when the stress-strain curve no longer changes shape from one cycle to the next).
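For illustration only (not taken from the paper), a minimal Python sketch of how the combined hardening law of equations (1) and (2) can be integrated explicitly for a uniaxial, strain-driven load path with one backstress; all parameter values are placeholders, not the identified ones:

```python
import numpy as np

def voce_yield(eps_pl, sigma0, Q_inf, b):
    """Isotropic hardening, eq. (2): current size of the yield surface."""
    return sigma0 + Q_inf * (1.0 - np.exp(-b * eps_pl))

def backstress_rate(alpha, sigma, sigma_y, C, gamma):
    """Nonlinear kinematic hardening, eq. (1), 1-D rate per unit equivalent plastic strain."""
    return C * (sigma - alpha) / sigma_y - gamma * alpha

# placeholder material parameters (illustrative only)
sigma0, Q_inf, b = 150.0, 80.0, 10.0     # MPa, MPa, -
C, gamma = 20000.0, 150.0                # MPa, -

alpha, eps_pl, deps = 0.0, 0.0, 1.0e-4
history = []
for direction in (+1.0, -1.0):           # forward pass, then strain reversal
    for _ in range(2000):
        sigma_y = voce_yield(eps_pl, sigma0, Q_inf, b)
        sigma = alpha + direction * sigma_y          # stress point stays on the yield surface
        alpha += backstress_rate(alpha, sigma, sigma_y, C, gamma) * deps
        eps_pl += deps                               # equivalent plastic strain always grows
        history.append((eps_pl, sigma))
```

The lower reloading stress after the reversal produced by such a sketch mirrors the Bauschinger effect discussed for the cyclic torsion tests.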

The parameters of the model have been identified using an inverse approach based on data from the cyclic torsion test performed on the studied materials at both deformation temperatures. An example of the comparison between the torque vs. twist angle data calculated using the calibrated hardening model and the data measured experimentally (cyclic torsion test) is presented in figure 5. It can be seen that the agreement between the model and the experiment is good, which confirms the accuracy of the applied methodology.

3.1. Multiscale model of the cyclic torsion test

The multiscale model of the torsion test was designed as shown in figure 4. The submodelling technique was used to bridge the different scales. A global model of the sample gauge section was prepared and analysed using the Abaqus Standard code. Next, the submodel was generated using the DMR approach and the calculations were performed again. A unit cell (100 μm × 100 μm × 100 μm) with 37 grains was created to capture the effect of the process on the inhomogeneity of both strain and microstructure. The parameters of the combined material hardening model applied in the submodel were additionally diversified using a Gauss distribution function to reflect differences in the crystallographic orientations.
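A minimal sketch (illustrative, not the authors' script) of how per-grain hardening parameters can be scattered with a Gaussian distribution around nominal values to mimic orientation-related differences; the nominal values and the 5% spread are assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_grains = 37                                               # grains in the unit cell

nominal = {"C": 20000.0, "gamma": 150.0, "sigma0": 150.0}   # placeholder nominal values
spread = 0.05                                               # assumed relative standard deviation

# one parameter set per grain, later assigned to the element set of that grain
grain_params = [
    {name: rng.normal(loc=value, scale=spread * value) for name, value in nominal.items()}
    for _ in range(n_grains)
]
```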

The equivalent von Mises stress distributions in the global model and in the unit cell during the first forward/reverse cycle of the torsion test are presented in figure 6a, b. It can be seen that the application of the multiscale modelling approach combined with the 3D DMR approach resulted in much higher accuracy of the results compared to a simulation using only the global model (figure 6a). The global material responses obtained from both models can be similar to some extent; however, the macro scale model neglects the inhomogeneities occurring along microstructure features. Additionally, the 3D DMR properly captured not only inhomogeneities in the stress or strain state but also grain shape changes resulting from the strain reversal (figure 6c). It can be seen that the first pass of cyclic deformation caused grain rotation. Its original position was then restored after strain reversal and application of the second pass of deformation with the same strain level applied in the opposite torsion direction. The macro scale model is unable to provide such detailed results. Due to the presented advantages, the authors decided to apply the same approach to model the AAD process.

Fig. 4. Multiscale model of the cyclic torsion test.

Fig. 5. Comparison of the measured and calculated torque vs. twist angle.

3.2. Multiscale model of the AAD process

The multiscale model of the AAD process, due to its complexity, requires two steps of submodelling, as presented in figure 7. First, a global model with 42000 eight-node hexahedral reduced-integration elements with hourglass control (C3D8R) was analysed using Abaqus Explicit. Drawing of a 300 mm long wire with an initial diameter of 6.5 mm was modelled. The tools were meshed with quad-dominated discrete rigid elements (R3D). Furthermore, the analysis was replicated on a smaller cylindrical region (10 mm long) subdivided from the global model using Abaqus Standard, and a much finer mesh was used. Finally, the second submodel was generated using the 3D DMR approach and the calculations were performed again,


using Abaqus Standard. A set of 5 unit cells (100 μm × 100 μm × 100 μm), each containing 37 grains, was created to capture the effect of the process on the inhomogeneity of both strain and microstructure.

The obtained global equivalent plastic strain distributions on the surface and on the transverse cross section of the drawn wire after the first pass are presented in figure 8.

Fig. 8. Examples of calculations. Equivalent plastic strain in drawn wire after 1st pass of drawing. Global model and unit cells attached at various positions of the wire’s cross-section.

Fig. 6. Von Mises stress distributions in the global model – a) and in the submodel – b); equivalent plastic strain distribution in the selected grain – c).

Fig. 7. Multiscale model of the AAD process.


It can be noticed that the inhomogeneity of strain that is characteristic of this deformation process was properly captured by the applied model. Again, much more detailed information regarding strain localisation and inhomogeneities can be extracted from the submodels in comparison to the macro scale model predictions. The 3D DMR approach shows different levels of strain inhomogeneity, localisation and distortion across subsequent grains resulting from the AAD process. Higher strain accumulation near the wire surface was also predicted by the computer model (figure 8d, e, f).

It can be seen that the application of the 3D DMR approach to the modelling of AAD can effectively support the experimental research.

4. CONCLUSIONS

Two complex loading cases with high local strain accumulation were simulated using a multiscale FEM model combined with the 3D Digital Material Representation approach. Based on the presented modelling results it can be concluded that the applied modelling strategy was able to capture most of the important phenomena accompanying processes with complex deformation modes with reasonably good accuracy. Future research will focus on the application of a crystal plasticity model integrated with DMR, which will further extend the predictive capabilities of the proposed methodology.

Acknowledgements. Financial support of the Polish Ministry of Science and Higher Education (grant no. N N508583839) is gratefully acknowledged. FEM calculations were performed at the ACK AGH Cyfronet Computing Centre under grant no. MNiSW/IBM_BC_HS21/AGH/075/2010.

REFERENCES

Davenport, S.B., Higginson, R.L., Sellars, C.M., 1999, The effect of strain path on material behaviour during hot rolling of FCC metals, Philosophical Transactions of the Royal Society of London A, 357, 1645-1661.

Jorge-Badiola, D., Gutierrez, I., 2004, Study of the strain reversal effect on the recrystallization and strain-induced precipitation in a Nb-microalloyed steel, Acta Materialia, 52, 333-341.

Lemaitre, J., Chaboche, J.L., 1990, Mechanics of Solid Materials, Cambridge University Press.

Madej, L., Rauch, L., Perzynski, K., Cybulka, P., 2011, Digital Material Representation as an efficient tool for strain inhomogeneities analysis at the micro scale level, Archives of Civil and Mechanical Engineering, 11, 661-679.

Sun, L., Muszka, K., Wynne, B.P., Palmiere, E.J., 2011, The effect of strain path reversal on high-angle boundary formation by grain subdivision in a model austenitic steel, Scripta Materialia, 64, 280-283.

Wielgus, M., Majta, J., Łuksza, J., Packo, P., 2010, Effect of strain path on mechanical properties of wire drawn products, Steel Research International, 81, 490-493.

APPLICATION OF THE THREE-DIMENSIONAL DIGITAL MATERIAL REPRESENTATION TO MODELLING OF MICROSTRUCTURE INHOMOGENEITY IN PROCESSES CHARACTERISED BY STRAIN PATH CHANGES

Abstract

The paper presents the possibilities of using the three-dimensional Digital Material Representation for multiscale modelling of materials deformed under strain path changes. In metal forming processes the material is subjected to a complex deformation history characterised by a high inhomogeneity of strain. A large strain gradient leads, in turn, to inhomogeneous microstructure development and makes the prediction of the final product properties particularly complicated. Proper control of these parameters is difficult and can be effectively optimised only when supported by numerical tools. The approach presented in this work was applied to the modelling of two metal forming processes characterised by strain path changes: cyclic torsion deformation and Accumulative Angular Drawing (AAD). It was shown that combining the multiscale FEM model with the three-dimensional Digital Material Representation significantly improves the accuracy of the results obtained when modelling strain inhomogeneity in the considered forming processes.

Received: October 17, 2012 Received in a revised form: November 26, 2012

Accepted: December 11, 2012


264 – 268 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

IDENTIFICATION OF INTERFACE POSITION IN TWO-LAYERED DOMAIN USING GRADIENT METHOD

COUPLED WITH THE BEM

EWA MAJCHRZAK1*, BOHDAN MOCHNACKI2

1 Institute of Computational Mechanics and Engineering, Silesian University of Technology, Konarskiego 18a, 44-100 Gliwice, Poland

2 Higher School of Labour Safety Management, Bankowa 8,40-007 Katowice, Poland *Corresponding author: [email protected]

Abstract

The non-homogeneous domain being the composition of two sub-domains is considered, while the position of the internal interface is unknown. The additional information necessary to solve the identification problem results from the knowledge of the temperature field at a set of points X selected from the domain analyzed. From the practical point of view the points X should be located at the external surface of the system. The steady temperature field in the domain considered is described by two energy equations (the Laplace equations), the continuity condition given on the contact surface and the boundary conditions given on the external surface of the domain. To solve the inverse problem the gradient method is used. The sensitivity coefficients appearing in the final form of the equation which allows one to find the solution using a certain iterative procedure are determined by means of the implicit approach of shape sensitivity analysis. This approach is especially convenient in the case of boundary element method application (this method is used at the stage of numerical algorithm construction). In the final part of the paper examples of computations are shown.

Key words: heat transfer, inverse problem, gradient method, boundary element method

1. INTRODUCTION

The following boundary value problem is considered

$$(x, y) \in \Omega_e : \quad \lambda_e \left( \frac{\partial^2 T_e(x,y)}{\partial x^2} + \frac{\partial^2 T_e(x,y)}{\partial y^2} \right) = 0, \qquad e = 1, 2 \qquad (1)$$

where index e corresponds to the respective sub-domains, λe is the thermal conductivity, and T, x, y denote the temperature and spatial co-ordinates, respectively. Equation (1) is supplemented by the typical boundary conditions, in particular

$$(x, y) \in \Gamma_{ex} : \quad -\lambda_1 \frac{\partial T_1(x,y)}{\partial n} = \alpha \left[ T_1(x,y) - T_a \right] \qquad (2)$$

where Γex is the external surface of the domain marked in figure 1, Ta is the ambient temperature, α is the heat transfer coefficient, and ∂T1/∂n denotes the normal derivative.

On the surface between the sub-domains the continuity of the heat flux and of the temperature field is assumed, which means

$$(x, y) \in \Gamma_c : \quad \begin{cases} \lambda_1 \dfrac{\partial T_1(x,y)}{\partial n} = \lambda_2 \dfrac{\partial T_2(x,y)}{\partial n} \\[2mm] T_1(x,y) = T_2(x,y) \end{cases} \qquad (3)$$


On the internal surface Γin (c.f. figure 1) the Dirichlet condition is taken into account

$$(x, y) \in \Gamma_{in} : \quad T_2(x,y) = T_b \qquad (4)$$

On the remaining parts of the boundary the no-flux condition is assumed.

As is well known, when the thermophysical and geometrical parameters appearing in the mathematical model of the process considered are given, then the direct problem is formulated and the temperature distribution in the domain can be found.

The inverse problem considered here is based on the assumption that the temperature distribution at the boundary Γex is known (e.g. from thermographs), while the position of Γc is unknown (Ciesielski & Mochnacki, 2012; Romero Mendez et al., 2010).

Fig. 1. Domain considered

2. SOLUTION OF DIRECT PROBLEM BY MEANS OF THE BOUNDARY ELEMENT METHOD

The boundary integral equations corresponding to the Laplace equations (1) are the following (Brebbia & Dominguez, 1992; Majchrzak, 2001)

$$(\xi, \eta) \in \Gamma : \quad B(\xi,\eta)\, T_e(\xi,\eta) + \int_{\Gamma} T_e^*(\xi,\eta,x,y)\, q_e(x,y)\, \mathrm{d}\Gamma = \int_{\Gamma} q_e^*(\xi,\eta,x,y)\, T_e(x,y)\, \mathrm{d}\Gamma \qquad (5)$$

where B(ξ, η) ∈ (0, 1) is the coefficient connected with the local shape of the boundary, (ξ, η) is the observation point, qe(x, y) = −λe ∂Te(x, y)/∂n, n = [nx, ny], and Te*(ξ, η, x, y) is the fundamental solution

$$T_e^*(\xi,\eta,x,y) = \frac{1}{2\pi\lambda_e} \ln\frac{1}{r} \qquad (6)$$

where r is the distance between the points (ξ, η) and (x, y), and

$$q_e^*(\xi,\eta,x,y) = -\lambda_e \frac{\partial T_e^*(\xi,\eta,x,y)}{\partial n} = \frac{d}{2\pi r^2} \qquad (7)$$

while

$$d = (x - \xi)\, n_x + (y - \eta)\, n_y \qquad (8)$$
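For orientation only (not from the paper), a small Python sketch of the 2D Laplace fundamental solution of equations (6)-(8); the function name is an assumption:

```python
import math

def fundamental_solution(xi, eta, x, y, nx, ny, lam):
    """T* and q* of eqs. (6)-(8) for observation point (xi, eta) and field point (x, y)."""
    r = math.hypot(x - xi, y - eta)            # distance between the two points
    d = (x - xi) * nx + (y - eta) * ny         # eq. (8)
    T_star = math.log(1.0 / r) / (2.0 * math.pi * lam)   # eq. (6)
    q_star = d / (2.0 * math.pi * r ** 2)                # eq. (7)
    return T_star, q_star
```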

In the numerical realization of the BEM the boundaries are divided into boundary elements and the integrals appearing in equations (5) are substituted by the sums of integrals over these elements. After the mathematical manipulations one obtains two systems of algebraic equations (Majchrzak, 2001)

$$\mathbf{G}_e\, \mathbf{q}_e = \mathbf{H}_e\, \mathbf{T}_e \qquad (9)$$

Now, the following notation is introduced (c.f. figure 1): $\mathbf{T}_1^1, \mathbf{T}_1^2, \mathbf{T}_1^{ex}, \mathbf{q}_1^1, \mathbf{q}_1^2, \mathbf{q}_1^{ex}$ are the vectors of the functions T and q at the boundary Γ1 ∪ Γ2 ∪ Γex of the domain Ω1; $\mathbf{T}_1^{c}, \mathbf{T}_2^{c}, \mathbf{q}_1^{c}, \mathbf{q}_2^{c}$ are the vectors of the functions T and q on the contact surface Γc between the sub-domains Ω1 and Ω2; $\mathbf{T}_2^3, \mathbf{T}_2^4, \mathbf{T}_2^{in}, \mathbf{q}_2^3, \mathbf{q}_2^4, \mathbf{q}_2^{in}$ are the vectors of the functions T and q at the boundary Γ3 ∪ Γ4 ∪ Γin of the domain Ω2, and then one has for the sub-domain Ω1

$$\left[\, \mathbf{G}_1^1 \;\; \mathbf{G}_1^2 \;\; \mathbf{G}_1^{ex} \;\; \mathbf{G}_1^{c} \,\right] \begin{bmatrix} \mathbf{q}_1^1 \\ \mathbf{q}_1^2 \\ \mathbf{q}_1^{ex} \\ \mathbf{q}_1^{c} \end{bmatrix} = \left[\, \mathbf{H}_1^1 \;\; \mathbf{H}_1^2 \;\; \mathbf{H}_1^{ex} \;\; \mathbf{H}_1^{c} \,\right] \begin{bmatrix} \mathbf{T}_1^1 \\ \mathbf{T}_1^2 \\ \mathbf{T}_1^{ex} \\ \mathbf{T}_1^{c} \end{bmatrix} \qquad (10)$$

for sub-domain Ω2

$$\left[\, \mathbf{G}_2^{c} \;\; \mathbf{G}_2^3 \;\; \mathbf{G}_2^4 \;\; \mathbf{G}_2^{in} \,\right] \begin{bmatrix} \mathbf{q}_2^{c} \\ \mathbf{q}_2^3 \\ \mathbf{q}_2^4 \\ \mathbf{q}_2^{in} \end{bmatrix} = \left[\, \mathbf{H}_2^{c} \;\; \mathbf{H}_2^3 \;\; \mathbf{H}_2^4 \;\; \mathbf{H}_2^{in} \,\right] \begin{bmatrix} \mathbf{T}_2^{c} \\ \mathbf{T}_2^3 \\ \mathbf{T}_2^4 \\ \mathbf{T}_2^{in} \end{bmatrix} \qquad (11)$$

The continuity condition (3) written in the form

$$\mathbf{q}_1^{c} = -\mathbf{q}_2^{c} = \mathbf{q}, \qquad \mathbf{T}_1^{c} = \mathbf{T}_2^{c} = \mathbf{T} \qquad (12)$$



allows one to couple the equations (10) and (11). Taking into account the remaining boundary conditions, one finally obtains

$$\begin{bmatrix} \mathbf{H}_1^1 & \mathbf{H}_1^2 & \mathbf{H}_1^{ex}-\alpha\,\mathbf{G}_1^{ex} & \mathbf{H}_1^{c} & -\mathbf{G}_1^{c} & \mathbf{0} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{H}_2^{c} & \mathbf{G}_2^{c} & \mathbf{H}_2^3 & \mathbf{H}_2^4 & -\mathbf{G}_2^{in} \end{bmatrix} \begin{bmatrix} \mathbf{T}_1^1 \\ \mathbf{T}_1^2 \\ \mathbf{T}_1^{ex} \\ \mathbf{T} \\ \mathbf{q} \\ \mathbf{T}_2^3 \\ \mathbf{T}_2^4 \\ \mathbf{q}_2^{in} \end{bmatrix} = \begin{bmatrix} -\alpha\,\mathbf{G}_1^{ex}\,\mathbf{T}_a \\ -\mathbf{H}_2^{in}\,\mathbf{T}_b \end{bmatrix} \qquad (13)$$

or

$$\mathbf{A}\,\mathbf{Y} = \mathbf{B} \qquad (14)$$

where A is the main matrix of the system of equations (13), Y is the vector of unknowns and B is the vector of free terms.

The system of equations (14) allows one to find the 'missing' boundary values. Knowledge of the nodal boundary temperatures and heat fluxes constitutes a basis for the computation of internal temperatures at an arbitrary set of points selected from the domain considered.

3. SOLUTION OF INVERSE PROBLEM USING GRADIENT METHOD COUPLED WITH THE BEM

The inverse problem considered here is based on the assumption that the temperature distribution at the boundary Γex is known, while the position of Γc is unknown. The surface Γc is defined by the set of points (xn, yn), n = 1, 2, …, N. The aim of the investigation is to determine the values of the shape parameters b1, b2, …, bN which correspond to the co-ordinates yn shown in figure 2.

The criterion which should be minimized is of the form (Kurpisz & Nowak, 1995; Burczyński, 2003)

$$S(b_1, \ldots, b_n, \ldots, b_N) = \frac{1}{M} \sum_{i=1}^{M} \left( T_i - T_{di} \right)^2 \qquad (15)$$

where Tdi, Ti are the temperatures known from the measurements and the calculated ones, respectively. In this paper the real measurements are substituted by the temperatures Tdi obtained from the direct problem solution for an arbitrarily assumed position of the points (xn, yn).

Using the necessary condition of optimum, one obtains

$$\frac{\partial S}{\partial b_n} = \frac{2}{M} \sum_{i=1}^{M} \left( T_i - T_{di} \right) \frac{\partial T_i}{\partial b_n} = 0, \qquad n = 1, 2, \ldots, N \qquad (16)$$

(16)

The function Ti is expanded into the Taylor se-ries taking into account the first derivatives

1

1( )

kj j

Nk k ki

i i j jj j b b

TT T b bb

(17)

where $b_j^0$ is the arbitrarily assumed value of the parameter bj, while for k > 0 it results from the previous iteration. Introducing (17) into (16) one has

$$\sum_{i=1}^{M} \sum_{j=1}^{N} U_{ij}^k\, U_{in}^k \left( b_j^{k+1} - b_j^k \right) = \sum_{i=1}^{M} \left( T_{di} - T_i^k \right) U_{in}^k, \qquad n = 1, 2, \ldots, N \qquad (18)$$

where

$$U_{ij}^k = \left. \frac{\partial T_i}{\partial b_j} \right|_{b_j = b_j^k} \qquad (19)$$

are the sensitivity coefficients. From the system of equations (18) the values $b_j^{k+1}$ are calculated. To determine the sensitivity coefficients, the methods of shape sensitivity analysis are used (Kleiber, 1997; Majchrzak et al., 2011; Freus et al., 2012). Here the implicit differentiation method, which belongs to the discretized approach and is based on the differentiation of the algebraic boundary element matrix equation (14), is applied. So, the system of equations (14) should be differentiated with respect to the parameter bj, and then

$$\frac{\partial \mathbf{A}}{\partial b_j}\, \mathbf{Y} + \mathbf{A}\, \frac{\partial \mathbf{Y}}{\partial b_j} = \frac{\partial \mathbf{B}}{\partial b_j} \qquad (20)$$

or



$$\mathbf{A}\, \frac{\partial \mathbf{Y}}{\partial b_j} = \frac{\partial \mathbf{B}}{\partial b_j} - \frac{\partial \mathbf{A}}{\partial b_j}\, \mathbf{Y} \qquad (21)$$

It should be pointed out that the derivatives of the boundary element matrices are calculated analytically (Majchrzak et al., 2011).
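A compact Python sketch (illustrative only, not the authors' code) of the iterative update defined by equations (17)-(19): each iteration solves the normal equations (18) for the new shape parameters. The routines `solve_direct` and `solve_sensitivities` are assumed placeholders for the BEM direct solution and the sensitivity problems of equation (21):

```python
import numpy as np

def identify_interface(b0, T_d, solve_direct, solve_sensitivities,
                       n_iter=20, tol=1e-6):
    """Gradient (Gauss-Newton type) iteration of eqs. (17)-(19).

    b0                     -- initial shape parameters b_j^0, shape (N,)
    T_d                    -- 'measured' temperatures T_di at the M sensor points, shape (M,)
    solve_direct(b)        -- returns T_i(b) at the sensor points, shape (M,)
    solve_sensitivities(b) -- returns U with U[i, j] = dT_i/db_j, shape (M, N), cf. eq. (21)
    """
    b = np.asarray(b0, dtype=float)
    for _ in range(n_iter):
        T = solve_direct(b)                      # direct BEM solution
        U = solve_sensitivities(b)               # sensitivity coefficients, eq. (19)
        # normal equations (18): (U^T U) db = U^T (T_d - T)
        db = np.linalg.solve(U.T @ U, U.T @ (T_d - T))
        b += db
        if np.linalg.norm(db) < tol:
            break
    return b
```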

4. RESULTS OF COMPUTATIONS

The domain of dimensions 2L × L (L = 0.02 m) shown in figure 1 has been considered. At first, the direct problem described in chapter 1 has been solved. The following input data have been introduced: thermal conductivities λ1 = 0.1 W/(mK), λ2 = 0.2 W/(mK), heat transfer coefficient α = 10 W/(m2K), ambient temperature Ta = 20°C (c.f. condition (2)), boundary temperature Tb = 37°C (c.f. condition (4)). The shape of the internal surface Γc has been assumed in the form of a parabolic function (other shapes can also be taken into account)

$$y(x) = \frac{0.8\, L\, (x - L)^2 + y_p\, x\, (2L - x)}{L^2} \qquad (22)$$

where (L, yp) = (0.02 m, 0.012 m) is the tip of the parabola. The discretization of the boundaries using linear boundary elements is shown in figure 2.
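A short sketch of how the interface nodes (xn, yn) can be generated for a given parabola tip; since equation (22) above is reconstructed from a damaged source, treat the exact formula (and the node count) as assumptions:

```python
import numpy as np

L, y_p = 0.02, 0.012          # domain half-length and parabola tip, m

def interface_y(x, L=L, y_p=y_p):
    """Parabolic interface of eq. (22): y(L) = y_p, y(0) = y(2L) = 0.8*L."""
    return (0.8 * L * (x - L) ** 2 + y_p * x * (2 * L - x)) / L ** 2

x_nodes = np.linspace(0.0, 2 * L, 21)      # assumed number of interface nodes
y_nodes = interface_y(x_nodes)             # the shape parameters b_n correspond to these y_n
```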

Fig. 2. Discretization of boundaries

In figure 3 the temperature distribution in the domain considered is presented, while figure 4 illustrates the course of temperature at the external surface.

To solve the inverse problem, 29 shape sensitivity coefficients corresponding to the y co-ordinates of the nodes from 29 = 68 to 47 = 50 (figure 2) have been distinguished. The nodes 28 and 48 are fixed – c.f. equation (20). So, 29 additional problems connected with the determination of the sensitivity functions have been formulated.

Fig. 3. Temperature distribution in the domain considered

Fig. 4. Temperature distribution at the external surface

Fig. 5. Results of identification – variant 1

The identification problem has been solved under the assumption that the temperatures at the nodes from 5 to 23 (figure 2) are known and the initial position of the internal boundary is described by function (22) where yp = 0.015 m or yp = 0.011 m (the different start points allow one to observe the course of the iteration process). In figures 5 and 6 the results of computations are shown. It is visible that


for exact input data the exact position of the boundary is obtained and the iteration process is convergent.

Fig. 6. Results of identification – variant 2

5. CONCLUSIONS

The proposed algorithm can be useful, among others, in medical practice (estimation of wound shape on the basis of the surface temperature distribution). It should be pointed out that both from the mathematical and the numerical point of view the problem is rather complicated, but taking into account the practical applications it seems that the scientific research in this scope should be continued. The proposed algorithm allows one to identify complex shapes of the internal boundary (the co-ordinates yn are estimated separately). In a similar way 3D problems can also be solved. In the future, detailed research on the convergence of the iterative procedure should also be done.

REFERENCES

Brebbia, C.A., Dominguez, J., 1992, Boundary elements, an introductory course, CMP, McGraw-Hill Book Company, London.

Burczyński, T., 2003, Sensitivity analysis, optimization and inverse problems, Boundary element advances in solid mechanics, eds, Beskos, D., Maier, G., Springer Verlag, Wien, New York, 245-307.

Ciesielski, M., Mochnacki, B., 2012, Numerical analysis of interactions between skin surface temperature and burn wound shape, Scientific Research of Institute of Mathematics and Computer Science, 1, 15-22.

Freus, S., Freus, K., Majchrzak, E., Mochnacki, B., 2012, Identification of internal boundary position in two-layers domain on the basis of external surface temperature distribution, CD-ROM Proceedings of the 6th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2012), eds, Eberhardsteiner, J., Bohm, H.J., Rammerstorfer, F.G., Vienna University of Technology, Vienna, Austria.

Kleiber, M., 1997, Parameter sensitivity, J. Wiley & Sons Ltd., Chichester.

Kurpisz, K., Nowak, A.J., 1995, Inverse Thermal Problems, Computational Mechanics Publications, Southampton-Boston.

Romero Mendez, R., Jimenez-Lozano, J.N., Sen, M., Gonzalez, F.J., 2010, Analytical solution of a Pennes equation for burn-depth determination from infrared thermographs, Mathematical Medicine and Biology, 27, 21-38.

Majchrzak, E., 2001, Boundary element method in heat transfer, Publ. of the Czestochowa University of Technology, Czestochowa (in Polish).

Majchrzak, E., Freus, K., Freus, S., 2011, Shape sensitivity analysis. Implicit approach using boundary element method, Scientific Research of the Institute of Mathematics and Computer Science, 1, 151-162.

APPLICATION OF THE GRADIENT METHOD AND THE BEM TO THE IDENTIFICATION OF THE SHAPE OF THE BOUNDARY BETWEEN SUB-DOMAINS IN A TWO-LAYERED NON-HOMOGENEOUS SOLID DOMAIN

Abstract

A non-homogeneous solid domain composed of two sub-domains is considered, while the position of the boundary surface between them is not known. The additional information which makes it possible to solve the inverse problem formulated in this way is the set of temperature values at the points X selected in the considered domain. From the practical point of view, the sensor locations should be placed on the external surface remaining in contact with the environment. The mathematical model of the process consists of a system of elliptic equations (Laplace equations), ideal contact conditions on the contact surface and conditions prescribed on the external surfaces. The solution of the problem has been obtained using the gradient method, and the sensitivity coefficients appearing in the resulting system of equations have been determined using the implicit approach of shape sensitivity analysis, which is particularly effective when the boundary element method is applied (this method was used at the stage of the numerical algorithm construction). In the final part of the paper the results of numerical computations are presented.

Received: October 2, 2012 Received in a revised form: October 22, 2012

Accepted: November 9, 2012


269 – 275 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

INFLUENCE OF THE SAMPLE GEOMETRY ON THE INVERSE DETERMINATION OF THE HEAT TRANSFER COEFFICIENT DISTRIBUTION ON THE AXIALLY SYMMETRICAL SAMPLE

COOLED BY THE WATER SPRAY

AGNIESZKA CEBO-RUDNICKA*, ZBIGNIEW MALINOWSKI, BEATA HADAŁA, TADEUSZ TELEJKO

Faculty of Metals Engineering and Industrial Computer Science, Department of Heat Engineering and Environment Protection,

AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Kraków, Poland *Corresponding author: [email protected]

Abstract

The paper presents the results of the heat transfer coefficient determination during the water spray cooling process. To determine the boundary condition over the metal surface cooled by a water spray, the inverse heat conduction problem has been used. In the investigations an axially symmetrical sample has been used as the cooled object. Because of the specific setup of the sensor used in the investigations, two finite element models have been tested in the inverse determination of the heat transfer coefficient: the first one, which simplifies the sensor geometry to a cylinder, and the second one, which describes the real shape of the sensor. Also, a comparison between two different models employed to determine the heat transfer coefficient over the cooled sample surface has been presented. The boundary condition models differ in the description of the function that has been employed to approximate the heat transfer coefficient distribution over the cooled surface in the time of cooling.

Key words: water spray cooling, heat transfer coefficient, boundary inverse problem, finite element method

1. INTRODUCTION

In the metal industry, water cooling is widely used to control the product temperature variation in the production process. Continuous casting lines are equipped with water spray secondary cooling zones. The main goal in this case is to ensure sufficient heat transfer from the ingot surface to achieve a proper solidification structure. Industrial hot rolling mills are equipped with systems for controlled cooling of hot steel products. In the case of strip rolling mills the main cooling system is situated at the run-out table to ensure the required strip temperature before coiling (Tacke et al., 1985; Malinowski et al., 2012). The proper cooling rate affects the final mechanical properties of products, which strongly depend on microstructure evolution processes. Numerical simulations can be used to determine the water flux which should be applied in order to ensure the desired product temperature. The heat transfer boundary condition in the case of water cooling is defined by the heat transfer coefficient (HTC). Due to the complex nature of the cooling process, the existing heat transfer models are not accurate enough in the case of high temperature processes common in the metal industry. Also, direct measurements of the HTC by such methods as mass transfer or the transient method that uses liquid crystals to measure the surface temperature cannot be used in the case of steel industry processes (Mascarenhas & Mudawar, 2010;


Liu et al., 2012). For such processes the best way to determine the HTC is to formulate the boundary inverse heat conduction problem (IHCP). There, the HTC can be determined as a function of the cooling parameters and the product surface temperature. In the inverse algorithm various heat conduction models and boundary condition models can be implemented. In the paper the results of the inverse calculation of the HTC have been presented. The calculations have been performed on the basis of temperature measurements at selected points inside the axially symmetrical sample cooled by a water spray. The experimental investigations have been conducted for two materials: inconel and brass.

2. BOUNDARY INVERSE MODEL

The HTC on the cooled surface of the cylinder can be determined from the inverse solution to the heat transfer problem by minimizing the objective function defined as:

$$E(\mathbf{p}) = \sum_{m=1}^{N_t} \sum_{n=1}^{N_p} \left[ T_m(\tau_n) - \hat{T}_m(\tau_n) \right]^2 \qquad (1)$$

where: p is the vector of the unknown parameters to be determined by minimizing the objective function, Nt – number of the temperature sensors, Np – number of the temperature measurements performed by one sensor in the time of cooling, Tm(τn) – the sample temperature measured by the sensor m at the time τn, and T̂m(τn) – the sample temperature at the location of the sensor m at the time τn calculated from the finite element solution to the heat conduction equation:

$$\frac{1}{r}\frac{\partial}{\partial r}\!\left( r\,\lambda\,\frac{\partial T}{\partial r} \right) + \frac{\partial}{\partial z}\!\left( \lambda\,\frac{\partial T}{\partial z} \right) + q_v - c\,\rho\,\frac{\partial T}{\partial \tau} = 0 \qquad (2)$$

where: T – temperature, τ – time, r, z – cylindrical coordinates, qv – internal heat source, λ – thermal conductivity, c – specific heat, ρ – density.

In the finite element model employed to solve equation (2) linear shape functions have been used. A description of the model with linear shape functions has been presented in the paper by Gołdasz et al. (2009).

The heat transfer boundary condition on the cooled surface of the metal cylinder has been expressed as a function of the surface radius and time:

$$q(r,\tau) = h(r,\tau)\left[ T_s(r,\tau) - T_a \right] \qquad (3)$$

where: Ts – cooled sample surface temperature, Ta – cooling water temperature, q – heat flux, h – heat transfer coefficient.

The variation of the heat transfer coefficient h at the metal surface in the time of cooling has been approximated by two HTC models. In the first model (model A) the average HTC over the cooled surface, as a function of the time of cooling and the average sample surface temperature, has been determined. In the second model (model B) the HTC distribution over the cooled surface has been approximated by a witch of Agnesi type function with an expansion in time of the HTC parameters (Cebo-Rudnicka et al., 2012).
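For illustration only (the exact parameterisation used by the authors is given in Cebo-Rudnicka et al., 2012, and is not reproduced here), a minimal sketch of a witch-of-Agnesi type radial profile that could represent h(r) at one time instant; the parameter names and values are assumptions:

```python
def htc_agnesi(r, h_max, a, r0=0.0):
    """Witch-of-Agnesi type radial HTC profile (illustrative form).

    h_max -- peak heat transfer coefficient, W/(m^2*K)
    a     -- half-width parameter controlling how fast h decays with radius, m
    r0    -- radius at which the peak is located, m
    """
    return h_max * a ** 2 / ((r - r0) ** 2 + a ** 2)

# example: HTC at a few radii of the 10 mm sample radius
for r_mm in (0, 5, 10):
    print(r_mm, htc_agnesi(r_mm / 1000.0, h_max=30000.0, a=0.004))
```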

3. PROBLEM FORMULATION

In the present study the boundary condition over the surface of the metal sample cooled by a water spray has been sought. The sample has the form of a cylinder 20 mm in height and 20 mm in diameter. The top surface of the cylinder has been cooled by the water sprays. The cylinder has been completed with a flange 30 mm in diameter and 1 mm in thickness and it has been placed in a cylindrical housing. The space between the cylinder and the housing has been filled with air, which reduces the heat losses to the surroundings. However, the flange, which allows the cylinder and the housing to be joined, causes the sample temperature field not to be perfectly one dimensional. In figure 1 the schematic illustration of the experimental setup, which consists of the cylindrical sensor, the flange and the housing, has been presented. The cylinder with the flange, as well as the housing, have been made from the same material. To measure the temperature inside the cooled sample, three fast-response NiCr-NiAl thermocouples have been used. The thermocouples have been placed in the symmetry axis of the cylinder at distances of 2, 4 and 6 mm from the cooled surface.

Fig. 1. Schematic illustration of the experimental setup employed for the determination of the heat transfer boundary condition.

The experimental tests have been performed for two materials, which differ substantially in thermal


conductivity. Inconel and brass samples have been selected for the study. The initial temperatures to which the materials have been heated were 730°C for inconel and 517°C for brass. The water spray pressure in both tests was 1 MPa and the water temperature was equal to 20°C. The water flux was 38.6 kg/(m2·s) while cooling the inconel sample and 1 kg/(m2·s) while cooling the brass sample. The temperature measurements logged during the experimental tests have been used as input data in the inverse calculation of the HTC. Because of the sensor construction, two finite element models have been tested in the inverse determination of the heat transfer coefficient. The first finite element model simplifies the sample geometry to a perfect cylinder (the simplified model). In the case of the second model, the cylindrical sample and the adapter ring (flange) have been described by the finite element mesh (the exact model). Further, two boundary condition models have been employed in equation (3) in order to determine the heat transfer coefficient on the sample surface.

4. RESULTS OF INVESTIGATIONS

The results of the inverse calculations have allowed the influence of the sample geometry description on the heat transfer coefficient identification to be determined. In figures 2 to 5 the comparison between the HTC variations in the cooling process calculated for the simplified and the exact description of the sample geometry in the finite element model has been presented. The figures present variations in the average values of HTC (boundary condition model A) versus the time of cooling (figures 2 and 4) and versus the average sample surface temperature (figures 3 and 5).

In the case of water spray cooling of the inconel sample, the simplification of the sample geometry description to a perfect cylinder does not affect the average HTC for the mean sample surface temperature from 730°C to about 250°C (figure 3). This range of temperature corresponds to the film and transition boiling regimes that take place on the sample surface during the water spray cooling process. During these processes a vapor film is formed on the cooled surface and it limits the heat transfer between the cooled surface and the cooling water. Additionally, the low heat conductivity of inconel causes the heat transfer to the flange to be low, so it does not influence the average HTC in these two boiling regimes. The heat transfer in the radial direction to the flange increases while the surface temperature decreases. Below 250°C (figure 3) the heat transfer process changes to nucleate boiling. This results in a significant increase in the HTC values. Simultaneously, heat conduction in the radial direction becomes more significant. These two processes affect the inverse determination of the HTC. Therefore the simplification of the sample geometry to a perfect cylinder in the finite element model results in HTC values about 10 percent higher than those obtained with the real sample geometry description (with flange) in the finite element model of heat transfer (figures 2 and 3). The average difference between the calculated and measured temperatures has been equal to 7.95°C and has not decreased for the better definition of the sample geometry (table 1).

Table 1. The average difference between measured and calculated temperatures at the thermocouple locations.

Case of study – average difference in temperatures, °C (inconel sample / brass sample):
– Average HTC over the cooled surface calculated for simplified definition of the sample geometry in the finite element model: 7.953 / 4.418
– Average HTC over the cooled surface calculated for exact definition of the sample geometry in the finite element model: 7.953 / 3.795
– Radial distribution of HTC over the cooled surface calculated for simplified definition of the sample geometry in the finite element model: 7.735 / 5.924
– Radial distribution of HTC over the cooled surface calculated for exact definition of the sample geometry in the finite element model: 7.735 / 3.752

The inverse calculations performed on the basis of the temperature measurements obtained for the spray cooling of the brass sample have indicated a significant influence of the sample geometry description on the average HTC values (figures 4 and 5). The thermal conductivity of brass is much higher than that of inconel. In such a case the heat transfer to the flange is much more important and the exact description of the cooled sample geometry plays an important role in the HTC identification. Neglecting the flange in the definition of the sample geometry has caused the values of HTC calculated for the whole spray


cooling process to be about 25 percent greater than those calculated using the model with the flange (the exact geometry model) (figures 4 and 5). Moreover, in the case of the brass sample the exact definition of the sample geometry (with flange) has led to a lower difference between the measured and calculated temperatures at the thermocouple locations (table 1).

Fig. 2. The comparison of the average HTC variations in the time of cooling obtained for the simplified and exact definition of the sample geometry in the finite element model. Inconel sample.

Fig. 3. The average HTC variations as a function of sample surface temperature obtained for the simplified and exact definition of the sample geometry in the finite element model. Inconel sample.

The boundary condition model discussed above gives only the average HTC over the cooled sample surface. In practice, the HTC distribution over the cooled surface is expected. Such a possibility is given by the second boundary condition model. Due to the axially symmetrical problem, only the radial variation of HTC in the time of cooling has been modeled. The analysis has also been performed for the two materials: inconel and brass. Further, the simplified and exact descriptions of the sample geometry in the finite element model have been considered. The results of the inverse calculation of the HTC distributions as functions of the sample radius and the time of cooling have been presented in figures 6 to 9 for the inconel sample and in figures 10 and 11 for the brass sample.

Fig. 4. The comparison of the average HTC variations in the time of cooling obtained for the simplified and exact definition of the sample geometry in the finite element model. Brass sample.

Fig. 5. The average HTC variations as a function of sample surface temperature obtained for the simplified and exact definition of the sample geometry in the finite element model. Brass sample.

The inverse solution for the HTC distribution along the cooled sample radius performed for inconel with the simplified definition of the sample geometry in the finite element model has not indicated visible differences in HTC along the radius of the cooled sample surface (figures 6 and 7). The exact description of the sample geometry (with flange) in the finite element model has resulted in HTC values lower by about 10% (figure 8).


Fig. 6. HTC variation versus time of cooling for selected locations along the cooled surface radius calculated for simplified definition of the sample geometry in the finite element model. Inconel sample, HTC model B.

Fig. 7. HTC variation versus surface temperature for selected locations along the cooled surface radius calculated for simplified definition of the sample geometry in the finite element model. Inconel sample, HTC model B.

Fig. 8. HTC variation versus time of cooling for selected locations along the cooled surface radius calculated for exact definition of the sample geometry in the finite element model. Inconel sample, HTC model B.

Fig. 9. HTC variation versus surface temperature for selected locations along the cooled surface radius calculated for exact definition of the sample geometry in the finite element model. Inconel sample, HTC model B.

Some difference in the HTC distribution versus sample surface temperature for the HTC model B has been observed only at the sample and flange connection (r = 10 mm in figure 9). It can be explained by the better description of the sample temperature near the flange by the exact geometry model. Implementation of the HTC model which allows for the distribution of the heat transfer coefficient resulted in solutions very similar to the average HTC model. It can be explained by the high water flux applied in the cooling of the inconel sample. In such a case the sample surface is cooled uniformly. In the case of cooling the brass sample, the diversification of HTC along the cooled surface radius has been observed both for the simplified and the exact description of the sample geometry in the finite element model (figures 10 and 11). Implementation of the exact definition of the sample geometry and of the HTC variation over the sample surface in the finite element model has allowed both the influence of the thermal conductivity of the sample material and the influence of the cylinder flange on the heat transfer between the cooled sample and the water spray to be illustrated. In the case of the simplified definition of the sample geometry as a perfect cylinder in the finite element model, the HTC values decrease along the radius of the sample. The greatest difference between the maximum HTC value in the cylinder axis and at the distance of 10 mm from the symmetry axis is equal to about 9 kW/(m2·K) (figure 10). Implementation of the exact definition of the sample geometry results in a much higher diversification of the HTC values. In this case the difference between the maximum values of HTC is equal to about 17 kW/(m2·K) (figure 11).


In both considered cases of cooling the metal samples, the exact definition of the sample geometry (with the flange) in the finite element model has resulted in lower average differences between the measured and calculated temperatures (table 1).

Fig. 10. HTC variation versus time of cooling for selected locations along the cooled surface radius calculated for simplified definition of the sample geometry in the finite element model. Brass sample, HTC model B.

Fig. 11. HTC variation versus surface temperature for selected locations along the cooled surface radius calculated for exact definition of the sample geometry in the finite element model. Brass sample, HTC model B.

5. CONCLUSIONS

The conducted analysis has allowed the influence of the exact definition of the cooled sample geometry in the finite element model on the solution of the heat transfer problem to be determined. It has been shown that the simplification of the sample geometry to a perfect cylinder in the finite element model results in about a 10 percent growth in the heat transfer coefficient determined by the inverse method in the case of a material characterized by low heat conductivity (inconel) and about a 25 percent growth in the HTC value in the case of high conductivity materials (brass). The identification of HTC performed for the two boundary condition models has shown that allowing for the HTC distribution over the cooled surface results in a more accurate determination of the heat transfer boundary condition. The developed definition of the boundary condition is capable of identifying both a constant and a variable heat transfer coefficient over the cooled surface of the cylindrical sample.

Acknowledgements. The work has been financed by the Ministry of Science and Higher Education of Poland, Grant No. NR15 0020 10.

REFERENCES

Cebo-Rudnicka, A., Malinowski, Z., Telejko, T., Hadała, B., 2012, Implementation of the finite element model with linear and Hermitian shape function to determination of the heat transfer coefficient distribution on the hot plate cooled by water spray, Proceedings of Numerical Heat Transfer 2012 International Conference, eds, Nowak, A.J., Białecki, R.A., Institute of Thermal Technology, Silesian University of Technology, Gliwice – Wrocław, 58-67.

Gołdasz, A., Malinowski, Z., Hadała, B., 2009, Study of heat balance in the rolling process of bars, Archives of Metallurgy and Materials, 54, 685-694.

Liu, T., Wang, B., Rubal, J., Sullivan, J.P., 2012, Correcting lateral heat conduction effect in image-based heat flux measurements as an inverse problem, Int. J. Heat and Mass Trans., 54, 1244-1258.

Malinowski, Z., Telejko, T., Hadała, B., Cebo-Rudnicka, A., 2012, Implementation of the axially symmetrical and three dimensional finite element models to the determination of the heat transfer coefficient distribution on the hot plate surface cooled by the water spray nozzle, KEM, 504-506, 1055-1060.

Mascarenhas, N., Mudawar, I., 2010, Analytical and computational methodology for modeling spray quenching of solid alloy cylinders, Int. J. Heat and Mass Trans., 53, 5871-5883.

Tacke, G., Litzke, H., Raquest, E., 1985, Investigation into the efficiency of cooling systems for wide-strip hot rolling mills and computer-aided control of strip cooling, Proceedings of a Symposium on Accelerated Cooling of Steel, ed., Southwick, P.D., The Metallurgical Society of AIME, Pittsburgh, Pennsylvania, 35-54.


INFLUENCE OF THE AXIALLY SYMMETRICAL SAMPLE GEOMETRY ON THE DETERMINATION OF THE HEAT TRANSFER COEFFICIENT DISTRIBUTION DURING WATER SPRAY COOLING

Abstract

The paper presents the results of the calculation of the heat transfer coefficient determined on the basis of experimental investigations. To determine the boundary condition on the surface of a metal cooled by a water spray, the solution of the boundary inverse heat conduction problem has been used. The experimental investigations have been carried out for an axially symmetrical sample. Because of the specific construction of the sensor used in the investigations, two finite element models describing the sample geometry have been tested in the inverse algorithm. The first model simplified the sample geometry to an ordinary cylinder; the second model described the real shape of the sample. Two models of the boundary condition approximation have also been tested in the work.

Received: September 17, 2012 Received in a revised form: October 24, 2012

Accepted: October 29, 2012


276 – 282 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

INVESTIGATION OF THE HEAT TRANSPORT DURING THE HOLLOW SPHERES PRODUCTION FROM THE TIN MELT

MICHAEL PETROV1,2, PAVEL PETROV2, JUERGEN BAST1, ANATOLY SHEYPAK3

1 Technical University Mining Academy of Freiberg, Institute of Machine Elements, Design and Manufacturing, Leipzigerstr. 30, 09596 Freiberg, Germany

2 Moscow State University of Mechanical Engineering (MAMI), Department „Car Body Making and Metal Forming“, B. Semenovskaya street 38, 107023 Moscow, Russia

3 Moscow State Industrial University (MSIU), Department „Electrical, Heating and Hydraulic Engineering and Energy machines“, Avtozavodskaya street 16, 115280 Moscow, Russia

*Corresponding author: [email protected]

Abstract

The present paper reveals one of the energy-efficient ways of producing the units (hollow spheres) of cellular structures for their further application in lightweight constructions, realized through a metallurgical procedure. The metallurgical method stands out among the known methods, because it is based on the physical properties of the materials used and on the boundary conditions of the process, without involving any organic core or the preparation of powders and slurries. Such small hollow spheres made from different materials can substantially change the weight of a construction part and can be used as acoustic and thermal insulation and also as protection against vibrations. They can be used as a unit cell for big parts or alone, filled with an inert gas, e.g. as fusion targets. Pure tin shells were produced in a transient (thixotropic) state of the material at elevated temperatures (close to the melting point of pure tin) and several simulation steps were used to determine the preferable boundary conditions. One of them is the investigation of the temperature fields during the formation process. The heat transport from the tin melt into the semi-solid tin shell influences the nucleation process, so the solid wall should be formed before the gas starts to form the inner hollow space. Otherwise the semi-solid shell will be broken by the gas pressure or the inner hollow space will not occur. For these purposes CFD (Computational Fluid Dynamics) and FEM commercial codes, namely FLUENT and the Solid Works Simulation package respectively, were used. At the end, verification of the obtained simulation results against the measurements on the laboratory stand and theoretical calculations was carried out. The current investigation was completed by the determination of the whole temperature field on the side surface of the forming nozzle, which was obtained from a thermogram captured with the help of an infrared camera (IRC).

Key words: hollow spheres, spherical shells, tin melt, FEM, CFD, Solid Works, Fluent, thermography, heat transport

1. INTRODUCTION

Generally any production route consists of one or several production, treatment and controlling operations which are connected together through automation devices. For hollow spheres production the metallurgical technique, illustrated in figure 1, was used because of its high effectiveness and low

production costs. The properties of the product change through the microstructure development, which results from the thermal energy consumption during the primary crystallization and the further heat treatment under different regimes. The main controlling operations can also be implemented to obtain the spheres' outer geometry, strength/ductility and inner geometry. Once the products have been sorted, the structure


assembling occurs depending on the application case. The current paper is focused on the first two steps of this route (metal melting and hollow sphere/shell solidification).

Fig. 1. General process route of hollow spheres/shells production.

2. EXPERIMENTAL EQUIPMENT

A specially designed and manufactured laboratory stand for the production of metal hollow spheres and shells from the tin melt was investigated. The melting, dosing and controlling of the solidification process occur in a special heating unit, which was made as a 'sandwich'-like assembly of copper and aluminum plates tempered by heating rods. It allows adjusting the temperature in a narrow range and performing the nozzle tempering with high precision.

The whole metal melting process runs in three stages: 1) preparing the metallic melt (optionally: alloying, degassing, protective gas procedure against oxidation); 2) switching on the heating devices (furnace, heating unit); 3) process initiation: the forming gas is fed into the tin melt through the gas needle and forms the spherical shells at the nozzle. A further description of the equipment can be found elsewhere (Petrov & Bast, 2010; Petrov, 2012).

3. NUMERICAL SIMULATION

Although a similar production technique was discussed by Kendall (1981) and Dorogotowcev & Merkulyev (1989), several differences and the resulting setups of the heat transport problems have still not been investigated numerically. Also, many fundamental aspects of heat transport given by e.g. Baehr & Stephan (2006) have to be coupled to the manufacturing route and equipment.

3.1. Model preparation

To perform the numerical simulation, the CAD models of the heating devices, including the crucible, were optimized before meshing (fastener connections were closed, sharp corners were rounded and the fibre thermal insulation was represented as a material with homogeneous properties). To obtain adequate results, different sizes of the mesh elements were applied: the biggest of 0.9 mm for the nozzle and 22.2 mm for the heating furnace, with a common aspect ratio of the volume tetrahedral elements of 1.5. The simulated hollow sphere has a transient heating bridge, which connects the sphere with the main tin melt volume in the nozzle area, as shown in figure 2d. It is expected that an additional amount of heat will be transferred into the spherical shell. It was assumed that the heating bridge elevates the temperature in the spherical shell, but the temperature difference stays the same. The lifetime of the bridge corresponds to the formation time of a single shell.

3.2. Temperature distribution in the system

During the simulation setup of the temperature distribution in the whole system (furnace – heating plates – nozzle), three main heat transport mechanisms (thermal conductivity, convection and radiation) were activated. To enable the simulation, the boundary conditions presented in table 1 were assigned. The simulation results delivered a homogeneous temperature distribution in the crucible, shown in figure 3.

Fig. 2. Prepared model for simulation (a), model of the crucible (b) with the tin melt volume (c) and hollow sphere of 3 mm in diameter and a wall thickness of 0.5 mm (d).


Table 1. Boundary conditions in the simulated system.

Element – heat source:
– Furnace: initial temperature on the refractory lining
– Heating plates: initial temperature on each heating rod or capacity per rod
– Inductor: initial temperature on the refractory lining

3.3. Temperature distribution in the heating plates (mesolevel)

The simulation of the heat transfer between the fluids and inside the heating system was investigated and represents the mesolevel of the system. The goal was to determine the expected temperature fields on the microlevel, i.e. at the nozzle. The heating unit is a 'sandwich'-like assembly and consists of an upper and a lower block, produced from pure copper and a cast aluminum alloy. In the aluminum upper and lower blocks the cavity of the nozzle was milled. After that the plates were mounted together. Because this heating unit will be placed in a space with a variable environmental temperature, the temperature distribution in the plates was calculated theoretically, then compared with the simulation results and validated with the help of the IRC, according to the route in figure 4.

The theoretical calculation was based on the total capacity of the heating rods. Equation (1) connects the volume and thermal material properties of the plates (Petrov, 2012).

$$P = \frac{m_{Cu}\, c_{Cu}\, \Delta\vartheta}{\Delta t} + \frac{m_{Al}\, c_{Al}\, \Delta\vartheta}{\Delta t} \qquad (1)$$

where cCu and cAl – specific heat capacities of copper and the cast aluminum alloy; Δϑ – temperature difference; mCu and mAl – masses of the copper and aluminum plates; Δt – heating time.
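A small Python sketch (illustrative; the plate masses and the rod power are placeholder values, not the stand's data) of how equation (1) can be rearranged to estimate the heating time for a given total rod capacity:

```python
# specific heat capacities, J/(kg*K)
c_cu, c_al = 385.0, 900.0

# placeholder masses of the copper and aluminum plates, kg
m_cu, m_al = 4.0, 2.5

delta_theta = 257.0       # required temperature rise, K (cf. the heating stage in table 2)
P = 1500.0                # assumed total capacity of the heating rods, W

# eq. (1) solved for the heating time
t_heat = (m_cu * c_cu + m_al * c_al) * delta_theta / P
print(f"estimated heating time: {t_heat / 60.0:.1f} min")
```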

From this analysis it was stated that the thermal resistance (temperature drop or heat loss) due to the plate stack and the heat loss due to convection and a small amount of radiation do not exceed 5%.

To perform the simulation, the mesh parameters were defined as follows: volume tetrahedral elements, the biggest element of 10.2 mm, aspect ratio of 1.5. The duration of the heating stage was taken as a measure of the numerical simulation accuracy. The results from the transient heat transport simulation were compared with the temperature on the heating rod placed in the upper copper plate, obtained with the help of a NiCr-Ni (type K) thermocouple. As the heat energy is transferred from the heating rods to the colder parts, the main amount of it reaches the nozzle. It was stated that the amount of the transported heat energy is constant over a period of time and is sufficient to guarantee the continuity of the forming process. At the same time, the capacity of the heating rods also determines the heating time. Several in situ measurements were carried out to justify the proper choice of the boundary conditions. Thus, including the additional heat transfer mechanism, namely convection, reduces the calculation error (compared to the measured value from table 2) to 8.4%.
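As a check of the quoted figure, the relative deviation between the measured and the simulated heating times from table 2 (16:55 vs. 15:30) can be recomputed:

```python
measured = 16 * 60 + 55      # s, experiment (table 2)
simulated = 15 * 60 + 30     # s, simulation with convection included

error = abs(measured - simulated) / measured * 100.0
print(f"relative deviation: {error:.1f} %")   # ~8.4 %
```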

Fig. 3. Temperature fields in the longitudinal cross section of the nozzle (a) and heating equipment (b).

Fig. 4. Construction stages of the heating plates.


Table 2. Heating time obtained for different simulation cases.

Heating stage: from 0 up to 257 °C
Experiment – time: 16:55 (min:sec)
Simulation – time: 14:30 (min:sec), boundary conditions: TR, HR + HC + T
Simulation – time: 15:30 (min:sec), boundary conditions: TR, HR + HC + T + C

where TR – transient heating process, HR – heat radiation, HC – heat capacity, T – constant temperature, C – convection.

Measurements with the IRC have shown that the temperature difference between the heating plates and the melt at the nozzle orifice stays at about 16°C, while the simulation and the theoretical investigation yielded temperature differences of 16 – 25°C and 13 – 24°C respectively, where the greater value corresponds to the greater heat loss in the system.

3.4. Temperature distribution at the nozzle (microlevel)

The temperature distribution obtained for the mesolevel can also be applied to the microlevel. Because of the new fluid phase (the forming gas), a new problem had to be defined. The temperature can be influenced by the expansion of the forming gas due to the differences between the cross sections of the pipeline and the feeding needle. Rapid temperature drops of millisecond duration result in a strong undercooling of the tin melt. This undesirable effect leads to process discontinuity due to metal solidification between two periods of hollow sphere formation. Through the simulation it could be shown that even a test chamber temperature of 232°C (the melting point of pure tin), at the investigated gas flow rates with an average value of 750 litres per hour, does not allow the undercooling effect to be eliminated at the distance of 1.5 mm from the needle top of the nozzle, as presented in figure 5a.

Moreover, once the temperature influence was clarified, the gas distribution in the tin melt was still unknown. To carry out the case study, a special CFD simulation and a simple verification test were developed. The principle of the test is the periodic injection of gas (frequency of 1 Hz) into the tin melt and measurement of the height of the cone zone invaded by the gas. The aim of the test is to find the cone height from the simulation results which corresponds to the real cone height from the verification test under the same boundary conditions and for the same time point, as shown in figures 5b and 5c. The theoretical description of the problem can be found in Bohl (1991), Loycyanskiy (2003), Sheypak (2006) and other fundamental works.

Because the forming gas expands into a certain melt volume, the expected temperature in the cross section can be calculated from equation (2) as:

T_{o,th} = T_i \left(\dfrac{p_o}{p_i}\right)^{\frac{\kappa - 1}{\kappa}}, \qquad (2)

where κ = c_p / c_V – isentropic exponent; c_p (c_V) – isobaric (isochoric) specific heat capacity; p_o / p_i – critical pressure ratio (p_o – pressure value after gas expansion and p_i – pressure value in the pipeline); T – temperature (index „i" for inside, „o" for outside and „th" for theoretical).

The true temperature due to gas expansion was calculated from equation (3), under the assumption that the gas velocity exceeds the critical value, using the velocity coefficient defined as φ = 1/√(1 + ξ′), with ξ′ as a drag coefficient, together with the energy equation:

Fig. 5. Phase and temperature distribution during the forming gas injection (a – numerical simulation of the temperature distribution after the gas expansion in the hot chamber, obtained in SolidWorks; b – gas distribution in the tin melt, obtained with a high speed camera, 100 fps; c – simulation of the gas distribution in the tin melt, obtained in FLUENT).


T_o = T_i - \varphi^2 \left(T_i - T_{o,th}\right), \qquad (3)

Thus, for an environmental temperature (ET) of 17°C, the temperature difference is about 7°C for the experiment and 6°C for the numerical simulation. Taking the height of the invaded cone from figure 5b (6.5 mm) on the temperature – distance diagram shown in figure 6a, two areas can be pointed out: up to the distance of 6.5 mm and above this value. The expected temperature drop of 6 – 7°C at an initial ET of 17°C gives a distance of 5.5 mm from the nozzle. For an initial ET of 232°C, the same distance gives a temperature decrease of up to 45°C. The area beyond the distance of 6.5 mm is outside the gas distribution zone and was not included in the investigation.
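The expansion-induced temperature drop of equations (2) and (3) can be evaluated with the short sketch below; the isentropic exponent, the pressure ratio and the drag coefficient ξ′ are assumed illustrative values, not the measured process parameters.

# Sketch of equations (2) and (3): temperature drop of the forming gas on expansion.
# kappa, the pressures and the drag coefficient xi are assumed illustrative values.

def expansion_temperatures(T_i, p_i, p_o, kappa=1.4, xi=0.5):
    """Return (T_o_theoretical, T_o) in kelvin for an inlet temperature T_i in kelvin."""
    T_o_th = T_i * (p_o / p_i) ** ((kappa - 1.0) / kappa)   # equation (2)
    phi = 1.0 / (1.0 + xi) ** 0.5                           # velocity coefficient (assumed form)
    T_o = T_i - phi**2 * (T_i - T_o_th)                     # equation (3)
    return T_o_th, T_o

# Example: environmental temperature 17 degC, assumed pressure ratio of 1.1
T_i = 17.0 + 273.15
T_o_th, T_o = expansion_temperatures(T_i, p_i=1.1e5, p_o=1.0e5)
print(f"theoretical drop: {T_i - T_o_th:.1f} K, corrected drop: {T_i - T_o:.1f} K")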

3.5. Temperature distribution in the sphere’s wall (microlevel)

The temperature distribution in the spherical shell also belongs to the microlevel of the investigated system and can be calculated from the simple equation (4) with the sphere radius as the argument. Following equation (4), thicker walls show a smaller deviation of the temperature from linearity (figure 6c) than thinner walls (figure 6b). The line for the thin wall itself has a greater slope than the line for the thick wall, which means that the temperature gradient for thin walls under the same boundary conditions must be greater in order to transport the heat from the melt and activate the solidification process.

T_{th}(r) = T_i + \left(T_i - T_o\right)\,\dfrac{\dfrac{1}{r} - \dfrac{1}{r_i}}{\dfrac{1}{r_i} - \dfrac{1}{r_o}}, \qquad (4)

where T – temperature (index „i“ for inner sphere surface, „o“ for outer sphere surface and „th“ for theoretical); r – arbitrary radius of the sphere.

The hollow spheres in figure 7 were previously meshed with volume tetrahedral elements with the largest size of 1 mm and aspect ratio of 1.5. After the simulation, the temperatures obtained at the middle radius of the shell were compared with those calculated from equation (4).
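A minimal sketch of this comparison step is shown below: equation (4) is evaluated at the middle radius of the shell wall for an assumed geometry and assumed surface temperatures (the values are placeholders, not the simulated ones).

# Minimal sketch of equation (4): temperature at the mid-radius of the shell wall,
# the quantity that was compared with the simulation results. Radii and surface
# temperatures below are assumed illustrative values, not the measured ones.

def shell_temperature(r, r_i, r_o, T_i, T_o):
    """Equation (4): steady temperature at radius r of a spherical shell."""
    return T_i + (T_i - T_o) * (1.0 / r - 1.0 / r_i) / (1.0 / r_i - 1.0 / r_o)

r_o = 1.5e-3              # m, outer radius of a 3 mm sphere
r_i = r_o - 0.5e-3        # m, inner radius for a 0.5 mm wall
T_i, T_o = 232.0, 228.0   # degC, assumed inner/outer surface temperatures

r_mid = 0.5 * (r_i + r_o)
print(f"T at mid-radius: {shell_temperature(r_mid, r_i, r_o, T_i, T_o):.2f} degC")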

Fig. 6. Temperature fields: temperature – distance diagram for determining undercooling effect (a); temperature distribution in the shell’s wall with different radius and thickness (b and c) and slope of the linear temperature distribution curves (d).


3.6. Solidification process

From the simulation results presented in figure 8 it is clearly seen that the solidification front moves from the top of the spherical shell to the nozzle orifice and that the temperature difference does not exceed 0.2°C.

The ISO-surfaces also show that, at the same time point, the temperature in the spherical shell is distributed very quickly and differs from the temperatures on the other surfaces by not more than 1°C, as presented in figures 8b and 8c.

The necessary solidification rate (SR) of a solidification process can be calculated from equation (5):

SR = \dfrac{\vartheta}{t_{cal}}, \qquad (5)

where t_cal – calculation time; ϑ – temperature difference between the outer and inner surfaces of the hollow sphere.

From the balance of heat fluxes described by Gottstein (2004), the heat of solidification can be obtained on the basis of equation (6):

\lambda_C \left.\dfrac{dT}{dx}\right|_C - \lambda_L \left.\dfrac{dT}{dx}\right|_L = h_S\,\dfrac{dx}{dt_C}, \qquad (6)

where T – temperature; h_S – heat of solidification; λ_C and λ_L – thermal conductivity of the crystal and the melt, respectively; x – distance (wall thickness); t_C – time of the crystal growth.
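The following sketch evaluates equations (5) and (6) for assumed input values (the conductivities, gradients and volumetric heat of solidification are rough placeholder numbers, not data from the paper), giving the solidification rate and the crystal growth velocity dx/dt_C.

# Sketch of equations (5) and (6): solidification rate and heat-flux balance
# giving the crystal growth velocity dx/dt_C. All numbers are assumed for illustration.

t_cal = 1000.0        # s, calculation time (assumed)
delta_theta = 0.2     # K, temperature difference across the shell wall (from the text)
SR = delta_theta / t_cal                        # equation (5)

lam_C, lam_L = 67.0, 30.0     # W/(m K), thermal conductivity of solid tin / melt (assumed)
grad_C, grad_L = 50.0, 20.0   # K/m, temperature gradients in the crystal and the melt (assumed)
h_S = 59.2e3 * 7300.0         # J/m^3, latent heat of tin per unit volume (assumed)

dxdt = (lam_C * grad_C - lam_L * grad_L) / h_S  # equation (6) solved for the growth velocity
print(f"SR = {SR:.2e} K/s, growth velocity = {dxdt*1e6:.3f} um/s")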

4. CONCLUSIONS AND OUTLOOKS

In the present paper the heat transport during the production of hollow spheres from the tin melt was investigated numerically, theoretically and experimentally. The results refer to the micro-, meso- and macrolevel numerical problems of the investigated system, represented by the simulation of the temperature distribution in the heating system, the heating plates and the nozzle. Hollow spheres/shells can be produced directly from the tin melt if the boundary conditions are properly defined. Due to the small dimensions of the sphere, the temperature fields in the sphere's wall change very quickly and need to be investigated separately with finer finite elements, because the temperature changes there during the whole processing time do not exceed 0.2°C. Figures 6b and 6c show the importance of the wall thickness in the heat transport problem. For thinner walls the temperature distribution shows a greater deviation from a linear characteristic than for thicker walls, where the temperature can be better described by a linear function. From equation (4) it follows that a non-uniform wall thickness of the shell

Fig. 7. Simulation of the temperature fields in the spherical shell at the time point of 20 seconds (a), 200 seconds (b), 2000 seconds (c).

Fig. 8. Temperature fields on the outer surface of the spherical shell in 1000 seconds (a); ISO-surfaces in 900 seconds of calculation time for a temperature distribution from 313°C and higher (b) and from 314°C and higher (c).


will cause different temperatures. Consequently, the cooling rate will differ and the microstructure development in the wall will vary. The optical surface quality of the produced microsphere also varies, from rough for small cooling rates to smooth and shiny for greater cooling rates.

The problem of the undercooling effect due to the expansion of the non-preheated forming gas at the nozzle was also formulated and investigated. The undercooling can be predicted on the basis of the temperature – distance diagram in figure 6a and calculated from equations (5) and (6). Both the information about the temperature distribution in the metallic melt and the gas distribution were used in the design and construction of the nozzle: nozzle placement, orifice diameter, temperature fields around the nozzle, etc. These essential parts of the work make it possible to increase the production capacity of the laboratory equipment.

The minimal diameter can be obtained from the Young-Laplace equation for hollow microspheres up to 1 mm in diameter; for bigger microspheres up to 3 – 4 mm in diameter, the technique given by Petrov (2012) can be applied. The results can be used for further microstructure prediction as a function of two arguments, namely the outer shell diameter and the wall thickness.

Acknowledgement. This paper was prepared in the scope of the state contract 16.740.11.0744, funded by the Ministry of Education and Science of the Russian Federation.

REFERENCES

Baehr, H. D., Stephan, K., 2006, Wärme- und Stoffübertragung, 5th edition, Springer Verlag, Berlin, Heidelberg, New York (in German).

Bohl, W., 1991, Technische Strömungslehre, 9th edition, Vogel Buchverlag, Würzburg (in German).

Dorogotowcev, W., Merkulyev, J., 1989, The methods of the hollow microspheres production, Physical Lebedev Institute Publishing, Moscow (in Russian).

Gottstein, G., 2004, Physical foundations of materials science, Springer Verlag, Berlin, Heidelberg, New York.

Kendall, J.M., 1981, Hydrodynamic performance of an annular liquid jet: production of spherical shells, Proceedings of the second international colloquium on drops and bubbles, eds, LeCroisette, D. H., NASA JPL, Pasadena, 79-87.

Loycyanskiy, L.G., 2003, Mechanic of fluids and gases, 7th edition, Moscow (in Russian)

Petrov, M., Bast, J., 2010, Entwicklung einer Anlage zur Hohlkugelherstellung, Scientific reports on resource issues, Proceedings of the 61. BHT (Berg- und Hüttenmännische Tagung), eds, Drebenstedt, C., TU Bergakademie Freiberg, Freiberg, Volume 3, 343-350 (in German).

Petrov, M., 2012, Untersuchungen zur Hohlkugel- und Schalenherstellung direkt aus der metallischen Schmelze zu ihrer Anwendung in Leichtbaukomponenten, PhD thesis, Freiberg (in German).

Sheypak, A.A., 2006, Hydraulic and hydraulic drive systems, 5th edition, MSIU Publishing, Moscow (in Russian).

THE PROBLEM OF HEAT TRANSPORT DURING THE PRODUCTION OF HOLLOW SPHERES FROM TIN

Summary

The paper presents one of the energy-saving methods of producing assemblies of cellular structures (hollow spheres) used in lightweight constructions. The described metallurgical process of sphere production exploits the physical properties of the materials used and the boundary conditions of the process, without introducing powders or slurries. Hollow spheres of small diameters made of various materials can significantly reduce the weight of structural components used as acoustic and thermal insulators, as well as vibration protection. They can be applied as unit cells of larger elements or as stand-alone elements filled with an inert gas. Shells of pure tin are produced at a temperature close to the melting point of the material (in the thixotropic state). Within the work a number of computer simulations were performed, which made it possible to determine the proper boundary conditions of the production process. Among other things, the temperature distribution during the production process was investigated. Heat transport in the process of forming tin shells from the liquid to the semi-solid state affects the nucleation process; therefore the walls in the solid state should be formed before the gas starts to shape the interior of the shell. Otherwise the semi-solid shell will crack under the gas pressure or will not form at all. The commercial program FLUENT and the SolidWorks package were used for the simulations. The simulation results were compared with the results of experiments and theoretical calculations. In addition, the investigations were supplemented by determination of the temperature field on the surface of the forming nozzle using a thermogram recorded with an infrared camera (IRC).

Received: September 20, 2012 Received in a revised form: November 4, 2012

Accepted: November 21, 2012


283 – 288 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

AN EXPERIMENTAL STUDY OF MATERIAL FLOW AND SURFACE QUALITY USING IMAGE PROCESSING

IN THE HYDRAULIC BULGE TEST

SŁAWOMIR ŚWIŁŁO

Faculty of Production Engineering, Warsaw University of Technology, Warsaw/Poland *Corresponding author: [email protected]

Abstract

The paper presents a method for the surface shape and strain measurement applied in the determination of metal flow and product quality. Accurate determination of these characteristics in the sheet metal forming operation is extremely im-portant, especially in automotive applications. However, the sheet metal forming is a very complex manufacturing pro-cess, and its success depends on many factors. This involves a number of tests that should be carried out to find optimal, yet cost-effective solutions. In this study the author discusses the investigations that are focused on better understanding of the strain values and their distribution in a product, and checking if they do not exceed certain limit resulting in the loss of stability. The hydraulic bulge test was identified as a method most applicable in these investigations, where both theoretical and experimental analysis was conducted.

First, the presented solution in the field of reconstruction of three-dimensional (3D) objects significantly expands the capabilities of the analysis, compared with the solutions existing so far. Second, a new method for the 3-dimensional geometry and strain measurement based on laser scanning technique is presented. Next, the image acquisition process and digital image correlation (DIC) are presented to recognize and analyze the objects taken from camera. The 3D shape and strain analysis presented in this paper offers a valuable tool in the metal products quality control, along with a complete testing equipment for maximum strain calculation just before cracking. The computer measurement system is directly connected to the hydraulic bulge test ap-paratus, thus providing fast and accurate results for material testing and process analysis. Key words: strain analysis, sheet metal forming, hydraulic bulge test

1. INTRODUCTION

Methods for the measurement of surface shape or deformation (displacement) generate solutions that are currently and generally applied in various scientific fields. Especially, three methods are wide-ly used by different authors. The first of them is the projection method described in detail by Swillo and Jaroszewicz (2001). The second method is based on the analysis of surface patterns (Swillo, 2001; Koga & Murakawa, 1996; Sirkis, 1990). The third method is laser scanning. In this group numerous solutions

are available, and their classification depends on the technique by which the displacements are measured. However, all the contemporary methods – 3D recon-struction of objects or measurements of defor-mations - are based on a system of image recording by means of a CCD camera. These techniques seem to be very useful in the field of metal forming be-cause they are very effective when strain values have to be determined by the analysis of surface patterns. The method commonly used in studies of the kinematics of the sheet metal forming process is


stretching of the sheet metal surface with a metal punch (figure 1a). In the present study, to test the plastic forming process, a method of bulging with fluid under pressure the sheet metal disc clamped at the edges has been applied (figure 1b). In this opera-tion, a uniform biaxial stretching occurs, due to which the processed element assumes the shape of a spherical cap. The proposed method of the strain measurement in a bulging process is based on the numerical image processing and three-dimensional object reconstruction. In the last several years, vari-ous technical improvements in the method of sheet metal pattern recognition took place, enabling analy-sis of the local state of strain in industrial sheet met-al forming. At the same time, during laboratory tests, many authors have been evaluating the sheet metal quality based on the forming limit curves (FLC), originally proposed by Marciniak (1961), and Marciniak & Kuczyński (1967).

Fig. 1. Schematics (section view) of the sheet metal forming: a) using metal punch, b) using hydro-bulging.

Currently, we can find, among others, methods that serve the determination of forming limit curves which, combined with calculations of the defor-mation occurring in the examined products, provide comprehensive information about the state of the material (for example: Swillo et al., 2000). This solution is based on the use of a metal punch operat-ing on samples with different geometry (figure 1a). These methods are based on either correlation or analysis of the geometry of regular coordination grids. As complementary to the created solutions of automated strain measurement, studies are carried out to detect the crack onset. The, used for many years, method of grids coupled with the image pro-cessing has gained numerous solutions. All these solutions are based on the identification of grids applied to the surface of sheet metal. The grids can

have a regular or stochastic shape. One of such solu-tions is a system that enables automatic measure-ment of lines perpendicular to each other and lying at a distance of 1-5 mm from each other, depending on the nature of the measurement. Studies on the design of systems for automatic strain analysis are carried out by Vialux Company (Feldmann & Schatz, 2006; Feldmann, 2007). The Company has developed a system called AutoGrid used for the analysis of deformation during bulge test. The pos-sibility of plotting the forming limit curves also ena-bles predicting what are the chances for further plas-tic forming of the sheet metal. Owing to this charac-teristic of the deformation limit of any sheet materi-al, the collected information allows us to determine if the strain values in areas of the largest product deformation are approaching the limit values.

Another option is a system for the analysis of deformation based on the bulging test using a steel punch (Liewald & Schleich, 2010). The process is performed on a specially designed testing machine, where an appropriate optical system with two cameras can record the run of the forming process. Image analysis of the deformed sample area is done by the commercial ARAMIS system made by GOM (Hijazi et al., 2004). The stand with this device is capable of performing a fully automated control of the bulging process using a metal punch. The solution uses a digital image correlation technique based on stochastic grids, which, in the authors' opinion, are much more efficient as regards the accuracy of the obtained results than the regular grids.

It is believed that, by careful designing of pro-cess operations, most of the final product defects and limitations can be eliminated, minimized, or at least controlled. According to the current experimental investigation, all available information to predict the quality of the final products is not sufficient. There-fore the goal is to develop a major parameter that can be used in an assessment of the quality of auto-motive parts after sheet metal forming.

2. EXPERIMENTAL APPARATUS

Solutions presented in this paper are referring to the three dimensional cases. On the example of bulg-ing test, a new possibility in the field of image pro-cessing techniques is demonstrated. These tech-niques seem to be very useful for the metal forming analysis because they are very effective when strain values have to be determined by the analysis of sto-chastic surface pattern. A specially designed exper-


imental apparatus for bulging process analysis was assembled (figure 2a). The developed solution com-prises a computerized, fully automatic, motorized test stand equipped with optical and vision systems to acquire the data. In the described study, tests of the plastic forming were performed using a method of bulging the sheet metal discs (fixed at the edges) with fluid under pressure. In this operation, a biaxial uniform stretching occurs, producing specimens where the dome is spherical in shape. In this exam-ple of the sheet metal forming process, numerous solutions are possible as regards the description of the process kinematics and study of the test condi-tions under which the loss of stability occurs due to the absence of friction on the tool - die contact sur-face (Marciniak, 1961). The use of the stand allows running two types of the measurements: basic and complex. The first group of measurements includes recording the run of the plastic forming process in terms of pressure and displacement, while second group includes measurements of the process kine-matics and of the shape of the bulged samples. Full description of the bulging process should enable further materials research and development of pro-cess control mechanisms. The elements of such con-trol can include, among others, an option for the automatic monitoring of the process run and the possibility of its interruption at a strictly determined stage of deformation.

The central measurement system in the test stand is an optical system, whose task is to allow a 3D reconstruction of the sheet metal formed (figure 2b). The proposed mathematical model to solve this problem has been based on the author's own research (Swillo et al., 2012). Owing to some simplifications

proposed in the measurement model, each laser-generated section is identified by one camera only, while the assumed axisymmetrical shape of the examined object allows its 3D reconstruction. To obtain a rapid (real-time), continuous (pixel-based), high-accuracy (sub-pixel) verification of the geometry of the distorted elements, it was necessary to use measurements based on vision control. The method of outline reconstruction using laser light, proposed for studies of the kinematics of the shaped objects, is based on a temporary outline searched for the examined element subjected to deformation. In addition to the measured contour, a front view of the object (with a stochastic grid) is recorded, which enables the reconstruction of a 3D image of the measured sample and determination of the size of deformation.
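A minimal sketch of the idea behind the axisymmetric reconstruction is given below: a single measured meridian outline (radius versus height) is swept around the symmetry axis to obtain the 3D dome surface. The outline here is a synthetic spherical-cap profile standing in for the laser-line data.

import numpy as np

# Sketch: reconstruct an axisymmetric dome surface from a single laser-line profile
# by sweeping the measured outline around the symmetry axis. The profile below is
# a synthetic spherical-cap outline used only as a placeholder for camera data.

a, h = 25.0, 10.0                       # mm, blank radius and dome height (assumed)
R = (a**2 + h**2) / (2.0 * h)           # radius of the spherical cap
r_profile = np.linspace(0.0, a, 50)     # radial positions along the laser line
z_profile = np.sqrt(R**2 - r_profile**2) - (R - h)   # height of the outline

theta = np.linspace(0.0, 2.0 * np.pi, 90)
r_grid, t_grid = np.meshgrid(r_profile, theta)
x = r_grid * np.cos(t_grid)             # swept 3D point cloud of the dome
y = r_grid * np.sin(t_grid)
z = np.tile(z_profile, (theta.size, 1))

print(f"reconstructed {x.size} surface points, apex height {z.max():.2f} mm")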

3. ANALYTICAL MODELING OF THE BULGING PROCESS

The aim of the investigations of the sheet metal forming process is to deepen our knowledge about the factors (phenomena) that restrict this process, during which the material undergoes plastic defor-mation by drawing, extrusion or redrawing. In this process, the formed object is obtained by mapping its shape on a sheet of metal using a punch and a die (figure 1b). The deformation in the forming process

cannot reach any arbitrarily large values, because some limiting phenomena will occur at a certain stage of the process, disturbing or disrupting even its further course. The main problems include strain localization, cracking, or curling of the sheet metal. The state of biaxial stretching occurring in many

Fig. 2. Schematics of the experimental apparatus for hydro-bulging process: a) the testing stand, b) an optical system to control the bulging process.


forming operations makes the bulge test very useful in this analysis (mainly due to the mere nature of the state of stress). The run of this process is usually considered in terms of the strain occurring in a perfectly flexible thin membrane. According to a mathematical formula relevant to this case, the product of the yield stress multiplied by the actual thickness of the membrane (σ·g) is constant (Marciniak, 1961). The deformation occurring in the center of the dome follows the relationship given below:

\varepsilon = 2 \ln\!\left(1 + \dfrac{h^2}{a^2}\right) \qquad (1)

where: h is the actual dome height and a is the blank radius. In the pure biaxial case, where the bulge is a perfect bowl (figure 1), the extreme value of the internal pressure p is given by:

p = \dfrac{2\,\sigma\, g}{R} \qquad (2)

where: R is the actual dome radius and g is the thickness at the top of the dome.

Since we have the biaxial uniform stretching, the relationship (2) can be solved by differentiation and simplified to the final form of:

(3)

This relationship allows us to determine in a graphical manner the strain value at which the pressure p will reach its maximum for the known hardening curve. Figure 3 shows the results of calculations, where the experimental results obtained on a stand for test forming using oil were compared with computations made for the membrane theory. The high consistency of the obtained results confirms that the theoretical solutions used to determine the polar limit strain values are correct theoretical assumptions for this area of the metal forming technology.
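A short sketch of this graphical procedure under membrane-theory assumptions is given below; the Hollomon hardening constants and the geometry are assumed placeholder values, not the DC04 data, and the thickness is updated from the apex strain of equation (1).

import numpy as np

# Sketch of the membrane-theory relations (1)-(2) for the bulge test: strain at the
# dome apex and the corresponding pressure, locating the pressure maximum.
# Hardening constants (K, n) and geometry are assumed values, not measured data.

a, g0 = 25.0e-3, 1.0e-3      # m, blank radius and initial sheet thickness (assumed)
K, n = 550.0e6, 0.22         # Pa, -, assumed hardening curve sigma = K * eps**n

h = np.linspace(1e-3, 20e-3, 200)            # dome heights
eps = 2.0 * np.log(1.0 + (h / a) ** 2)       # equation (1), strain at the apex
g = g0 * np.exp(-eps)                        # current thickness at the dome top
R = (a**2 + h**2) / (2.0 * h)                # dome radius of the spherical cap
sigma = K * eps ** n                         # flow stress from the hardening curve
p = 2.0 * sigma * g / R                      # equation (2), internal pressure

i_max = int(np.argmax(p))
print(f"pressure maximum p = {p[i_max]/1e5:.2f} bar at strain {eps[i_max]:.3f}")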

4. STRAIN MEASUREMENTS ON STOCHASTIC GRIDS

As a result of the performed calculations, the coordinates for the projection of nodal points in a two-dimensional space were obtained. Using information from the second camera, the profile of the bulged sample was determined (according to the previous

Fig. 3. A comparison of experimental and analytical results of the bulging test: a) strain distribution, b) stress-strain hardening curve.

Fig. 4. Displacement and strain calculation: a) global strain calculation using DIC, b) vision inspection (crack localization), c) comparison of the local strain calculation (micro-strain results) and vision inspection.


description) and, in accordance with the proposed 3D reconstruction algorithm, the third coordinate was specified. Strain measurements were carried out according to the described method of calculations for the main deformation direction and directional displacement gradient measurements (figure 4a).

As an operation complementary to the strain calculation, the micro-deformations in the zone of crack onset during the bulging process were determined. Accurate measurement of the forming limit is one of the major issues in plotting the forming limit curves. The method of image correlation used for this purpose is a highly efficient tool for an accurate measurement of these parameters. The method consists in adding up two values of the deformation. The first value of the deformation is calculated as a result of the identification of the position (image) for which the strain localization occurs (figure 4a). The second value results from the determination of the strain that occurs in the crack onset zone (figure 4b). Figure 4c shows the comparison of the calculated strain localization and the vision inspection.
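The core correlation step can be sketched as below: a reference subset is located in the deformed image by maximising the normalised cross-correlation over a search window. Synthetic random images stand in for the recorded frames, and the implementation is a simplified illustration of DIC rather than the system used in the study.

import numpy as np

# Minimal sketch of the digital image correlation (DIC) step: locating a reference
# subset in the deformed image by normalised cross-correlation.

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + 1e-12))

rng = np.random.default_rng(0)
ref = rng.random((200, 200))                        # reference image (stochastic pattern)
deformed = np.roll(ref, (3, 5), axis=(0, 1))        # "deformed" image, rigid shift only

y0, x0, win = 80, 80, 31                            # subset position and size in the reference
subset = ref[y0:y0 + win, x0:x0 + win]

best, best_shift = -1.0, (0, 0)
for dy in range(-8, 9):                             # search window around the subset
    for dx in range(-8, 9):
        cand = deformed[y0 + dy:y0 + dy + win, x0 + dx:x0 + dx + win]
        score = ncc(subset, cand)
        if score > best:
            best, best_shift = score, (dy, dx)

print(f"recovered displacement {best_shift}, correlation {best:.3f}")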

Finally, based on the experimental calculations, the hardening curve for DC04 was plotted, comparing the results with the measurements made by the method of uniaxial stretching and with information about the equation of the curve given in the literature (figure 5a). The large scatter in the experimental measurements is due to the lack of a more refined technique for taking precise strain measurements by the method of correlation. The large number of images generated during the measurements is an obstacle to precise determination of the deformation history, which is a key factor in the calculation of plastic properties.

5. SUMMARY

The method proposed by the author consists in the identification of an outline of the examined ax-isymmetrical object, extended to the identification of three-dimensional objects with the possibility of deformation measurements. The use of this approach in the study of the kinematics of the forming process is a solution that has required the development of a mathematical model, the introduction of a number of assumptions to the design of a test stand using this method, as well as the development of methods to process images recorded during plastic forming. The method commonly used in the study of the kin-ematics of the sheet metal forming is stretching of the sheet metal surface with a metal punch. In this study, the test method used for plastic forming has been bulging with a fluid under pressure of the sheet metal discs, fixed on the edges. In this operation, a biaxial, uniform stretching occurs, resulting in the formation of objects in the shape of a spherical cap. The described example of the process of the sheet

metal forming allows obtaining a number of solu-tions as regards the description of the process kine-matics and study of the test conditions under which the loss of stability occurs due to the absence of friction on the tool - die contact surface.

Acknowledgements. Scientific work financed

as a research project from funds for science in the years 2009-2011 (Project no. N N508 390637).

REFERENCES

Feldmann, P., Schatz, M., 2006, Effective Evaluation of FLC-Tests with the optical in-process strain analysis system AutoGrid, Proc. Conf. FLC, ed, Hora P., Zurich, 69-73.

Fig. 5. a) comparison of experimental results with the hardening curve: b) bulge-samples.


Feldmann P., 2007, Application of strain analysis system AUTOGRID for evaluation of formability tests and for strain analysis on deformed parts, Proc. Conf. Interna-tional Deep Drawing Research Group, ed, Tisza M., Gyor, 483–490.

Hijazi, A., Yardi N., Madhavan V., 2004, Determination of forming limit curves using 3D digital image correlation and in-situ observation, Proc. Conf. SAMPE, Long Beach, 791-803.

Koga, N. and Murakawa, M., 1996, Application of visioplastici-ty to experimental analysis of shearing phenomena, Proc. Conf. Advanced Technology of Plasticity, Vol. II, ed, T. Altan, Columbus, 571-574.

Liewald, M., Schleich R., 2010, Development of an Anisotropic Failure Criterion for Characterising the Influence of Curvature on Forming Limits of Aluminium Sheet Metal Alloys, Int. Journal of Material Forming, 3,1175-1178.

Marciniak, Z., Kuczyński, K., 1967, Limits strains in the pro-cesses of stretch-forming sheet metal, Int. Journal of Mechanics Science, 9, 609-612.

Marciniak Z., 1961, Influence of the sign change of the load on the strain hardening curve of a copper test piece subject to torsion, Archives of Mechanics, 13, 743-752 (in Polish).

Sirkis, S., 1990, System response to automated grid methods. Opt. Eng., vol. 29, 1485-1493

Swillo, S., and Jaroszewicz, L.R., 2001, Automatic shape meas-urement on base of static fiber-optic fringe projection method. Proc. Conf. Engineering Design & Automation, eds, Parsaei, H.R., Gen, M., Leep, H.R., Wong J.P., Las Vegas, 476-481.

Swillo, S., 2001, Automatic of strain measurement by using image processing. Proc. Conf. Engineering Design & Automation, eds, Parsaei, H.R., Gen, M., Leep, H.R., Wong J.P. Las Vegas, 272-277.

Świłło, S., Kocańda, A. and Piela, A., 2000, Determination of the forming limit curve by using stereo image pro-cessing. Proc. Conf. Metal Forming 2000, eds, Pietrzyk M., Kusiak J. & Majta J. and Hartley P., & Pillinger I., Kraków, 545-550.

Świłło, S., Czyżewski, P., Lisok, J., 2012, An experimental study for hydro-bulging process using advanced com-puter technique, Proc. Conf. Metal Forming 2012, eds, Kusiak J., Majta J., Szeliga D. and Weinheim, Kraków, 1411-1414.

EXPERIMENTAL ANALYSIS OF MATERIAL FLOW AND SURFACE QUALITY CONTROL IN THE BULGING PROCESS USING IMAGE PROCESSING

Summary

The article presents a method of measuring geometry and strain with reference to displacement fields and final product quality. Accurate characterization of these quantities in sheet metal forming processes is extremely important, particularly in the automotive industry. However, it is an extremely difficult process which depends on many factors. Therefore many tests are used in order to determine the material characteristics. In the presented study the author concentrates on a better understanding of the strain distribution and of its concentration leading to the loss of stability. The hydro-bulging method has been proposed, for which two solutions, experimental and analytical, are presented. The first part presents solutions in the field of three-dimensional reconstruction, which significantly extend the capabilities compared with traditional optical methods of shape analysis. Then the results of image analysis using correlation are presented, pointing to considerable facilitation in solving the problems of strain determination and shape measurement for the presented example, as well as in the identification of locations of potential cracks. Owing to the coupling of the testing device with a computer system, it is possible to present the final results of measurements of material properties quickly and precisely.

Received: October 15, 2012 Received in a revised form: November 21, 2012

Accepted: December 5, 2012


289 – 294 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

SELECTION OF SIGNIFICANT VISUAL FEATURES FOR CLASSIFICATION OF SCALES USING BOOSTING TREES MODEL

SZYMON LECHWAR

ArcelorMittal Poland, Hot Rolling Mill in Kraków, ul. Ujastek 1, 30-969 Kraków *Corresponding author: [email protected]

Abstract

The subject of this paper is to design and implement an efficient model for various kinds of scales recognition at the Hot Rolling Mill (HRM) in Kraków. Subsequently, the model and its most important variables can be used to describe and distinguish different kinds of scales. At the moment an extensive knowledge regarding the reasons of scale occur-rence is gathered. Nevertheless, the real challenges nowadays seem to be measuring techniques of those phenomena, as well as reliable online classification.

This paper describes the basics of automatic surface inspection system (ASIS) which was used as a source of entry data, as well as the method of interpretation of the data obtained from this system. The ASIS provided numerous features describing single image, which was considered as a defect. The objective of this paper was to supply information regard-ing the most important visual attributes, which will be subsequently used in building reliable classifier for scale recogni-tion. It was done by use of data mining techniques. The result was a set of measurement data, stored in online production database.

However, some kinds of scales could not be recognized efficiently. The reason behind that was the lack of unique features which could distinguish them from the other defects. This problem will be solved in subsequent studies by creating offline post-processing rules. Key words: automatic surface inspection system, boosting trees, data mining, hot rolling mill

1. INTRODUCTION

In today's industry, global competition and rising customer’s requirements are becoming increasingly important in production of high quality products. At the same time, each plant puts strong emphasis on the automation of its process and the maximum costs reduction. Combination of these factors often proves to be very difficult or even impossible to achieve with the use of common production methods.

The steel industry is no exception to that rule. Direct customers and subsequent treatment processes (e.g. the cold rolling process) require production of higher quality steel while reducing costs. One way to achieve this goal is the application of automatic surface inspection of rolled sheets (ASIS – Automatic Surface Inspection System). The purpose of the system is to take pictures of the produced material, to detect local variations in contrast on its surface and to classify individual irregularities. Each picture taken by the system is digital and converted into grey-scale pixels. In this way a map can be obtained which supplies information regarding defective material in terms of various defects and pseudo-defects. This type of system brings a significant reduction of the visual inspections performed by a human inspector.

To make it possible to build a reliable classifier of surface defects, the ASIS needs to be taught. A person who is an expert in the given classification should create sets of defects that will be used to "teach" the software provided by the manufacturer. In an


ordinary approach, the end user (an expert in the field) selects the images that he believes belong to specific classes of defects and arranges them in the program supplied by the manufacturer. On this basis, the software creates models, examines the characteristics of the images and selects the rules by which it will be possible to classify newly emerging images. This approach cuts out the user's knowledge about the visual characteristics of the images. In this approach it is impossible to distinguish specific types of defects using the results given by the ASIS. Furthermore, the user cannot build additional rules in third-party software to assist classification of similar defect classes.

In this study it was decided to deal with this issue in a more detailed manner. The aim of the work was to find the visual characteristics of defects that best serve to build a model (Webb, 2002; Bakker et al., 2006). The research was concentrated on scale defects produced at the hot rolling mill in Kraków. It was decided to manually select a set of reference defects, analyse the visual features of each defect class which have an influence on the construction of a scale classification model, and decide which features could be used in future work to build a reliable scale classifier.

2. DISTRIBUTION OF SCALES AT HOT ROLLING MILL IN KRAKÓW

ASIS monitors the whole coil production at the hot rolling mill, returning as a result a map of irregularities in the contrast detected on the surface of the hot strip. The number of possible defects that might appear during production varies depending on the steel grade, strip thickness and technological mill settings. At most, about 30 different real defects can occur on the hot strip. Therefore, ASIS is trained to detect and classify all of them. This study focused on scale defects. Based on the ArcelorMittal internal defects catalogue (Breitschuh et al., 2007) and expert knowledge, it was decided to select and distinguish 10 scale classes.

Different defects occurring on the production line of the hot rolling mill in Kraków were sorted in order to conduct the study. The defect images were taken by the ASIS from the production line. From the pool of 26000 candidate images, some were isolated as real scale defects. This was followed by manual classification of the images based on expert knowledge and reference materials (Melfo et al., 2006; Sun et al., 2003; Sun et al., 2004). As a result, a set of 3300 scale defects was gathered and hand-classified. Not all scale classes will be presented in this paper. The classes will be treated as reference data on the basis of which the data analysis will be carried out.

3. SELECTION OF THE MOST RELEVANT VISUAL FEATURES OF THE SCALE DEFECTS USING DATA MINING METHODS

Creation of a rich set of reference data provided an opportunity to explain the visual features of scales in detail. The images taken by the ASIS were analysed by the manufacturer's software to find local variations in contrast. Such areas, called regions of interest (ROI), received a number of features that describe their visual characteristics in a numeric manner. Features which provided information regarding the classification of the currently implemented classifier were removed, as they supply unnecessary data at this stage of the study. In the end, the raw data consisted of 744 variables, which were passed to the further analyses.

3.1. Preliminary data analysis

The first step in the data mining analysis (Hand et al., 2001; Han & Kamber, 2006) focused on data preparation and cleaning, which had been significantly reduced due to the correctness of the reference data (manual classification). The data do not contain any missing fields or repeated observations. Variables with variance below 10^-10 were removed, as they did not carry any valuable information. It was assumed that the reference data do not contain any unusual values or outliers. No transitions or transformations were carried out during the data cleaning stage.

In order to seek the most important features describing the categorical variable "type of scale", the input data were analysed by two data mining (Statsoft, 2006) modules:
– Decision trees C&RT (Classification and Regression Trees),
– Variables selection and analysis of the causes, which finds the best predictors for each dependent variable; interactions between predictors are not taken into account.
In total, 145 variables were selected for further analysis.
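A minimal sketch of this preliminary screening (dropping near-constant variables and, optionally, one of each strongly correlated pair) is shown below; the data frame is random placeholder data, and the thresholds follow the values quoted above (variance 10^-10, correlation 0.8).

import numpy as np
import pandas as pd

# Sketch of the preliminary feature screening: drop near-constant variables
# (variance below 1e-10) and one of each strongly correlated pair (|r| > 0.8).
# The data frame is random placeholder data standing in for the ASIS features.

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.random((300, 20)), columns=[f"f{i}" for i in range(20)])
X["f19"] = 0.123                                       # a near-zero-variance column
X["f18"] = X["f0"] * 0.99 + 0.01 * rng.random(300)     # a redundant column

# 1) remove near-constant variables
X = X.loc[:, X.var() > 1e-10]

# 2) remove one variable from each highly correlated pair
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.8).any()]
X_reduced = X.drop(columns=to_drop)

print(f"{X.shape[1]} variables after variance filter, {X_reduced.shape[1]} after correlation filter")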

3.2. Choice of the best set of features describing variable "type of scale"

It was decided to build a Boosting Trees model in order to supply information regarding the most important features that will be used in the creation of a


reliable classifier. Besides classification, the model defined the most relevant attributes, which are the subject of this study. The input data were divided into a learning sample and a validation sample with an 80% to 20% ratio. In order to find the most efficient model, different parameterisations were analysed.

The first step covered testing of the model depending on the number of variables. Two variants were tested, i.e. without the use of redundant variables (those with correlation between variables exceeding 0.8) and with redundant variables. The first case (figure 1) shows that the best model can be obtained for 21 variables.

Fig. 1. Correctness of the Boosting Trees model depending on number of variables – without the use of redundant variables.

The second case (figure 2) gave much more promising results at 33 variables, with 94.26% of correctness on the learning sample and 78.65% of correctness on the test sample.

Fig. 2. Correctness of the Boosting Trees model depending on number of variables – with the use of redundant variables.

Parameterisation was continued with the use of 33 variables, as they gave the best model at this point of the study. Changing the a priori probability, the maximum number of nodes, the maximum number of levels and the minimum cardinality of a descendant did not change the efficiency of the model. Only one parameter, the minimum cardinality of a node set at 123, gave a better model (figure 3).

Fig. 3. Correctness of the Boosting Trees model depending on number of minimum cardinality of node.

Finally, the best model reached 93.84% correctness on the learning sample and 80.34% correctness on the test sample. The rest of the parameters remained at their default values. Table 1 contains the gathered parameters of the model.

Table 1. Parameters of the best Boosting Trees model.

Number of variables 33

Erase of redundancy data No

Fast variables selection No

Minimum cardinality of node 123

Minimum cardinality of node of descendant 1

Maximum number of levels 10

Maximum number of nodes 13

A priori probability Equal

The benefit from the construction of this model was the selection of 33 variables which describe the characteristics of the scale defects. Table 2 presents the collected variables, along with their significance and a short description.
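The sketch below reproduces the general workflow with scikit-learn's GradientBoostingClassifier used as a stand-in for the Boosting Trees module: an 80/20 split, model fitting, accuracy on both samples and a ranking of feature importances. The feature matrix and class labels are random placeholders, so the accuracies are not comparable with the reported ones.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Sketch of a boosted-trees model analogous to the one described above.
# Random data stands in for the ASIS feature vectors; labels stand in for scale types.

rng = np.random.default_rng(2)
X = rng.random((3300, 33))                  # 33 selected visual features
y = rng.integers(0, 10, size=3300)          # 10 scale classes (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_train, y_train)

print(f"learning-sample accuracy: {model.score(X_train, y_train):.3f}")
print(f"test-sample accuracy:     {model.score(X_test, y_test):.3f}")

ranking = np.argsort(model.feature_importances_)[::-1][:10]
print("top features:", ranking)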

The first two items in table 2 describe the decomposition of scale on the strip. It is most often associated with abnormal operation of the descalers, because they remove only a portion of the scale covering the slab. Nevertheless, in this paper, the operation of the descalers is regarded only as the cause of defects, which should be eliminated. The core of the work was to find visual features that could be used in the creation of an efficient scale classifier. Therefore, the position of defects along the strip will be ignored.


4. RESULTS - DISTINCTION OF SCALES THROUGHOUT VISUAL FEATURES OF ITS IMAGES

The most relevant features that had been isolated from the wide variety of attributes given by the ASIS were used to distinguish the scale classes. Subsequently, these features, along with sufficient logic, will support the automatic classifier (built by default within the supplier software). It is possible to create additional classification rules both in the C++ language and as T-SQL stored procedures (ASIS database).

One example of a feature that, together with the necessary logic, could be implemented as classification support is the "horizontal to vertical difference of gradient range". It describes the numerical difference between the horizontal and vertical gradient ranges (in grey scale) of the defect. Figure 4 shows the decomposition of the feature for line scale, which originates from the first stand of the finishing train at the end of a rolling campaign.

Figure 5 shows decomposition of the feature for single strip scale - defect formed due to malfunction of the descalers.

The feature could support the final classification decision between these two classes. However, straightforward use of the attribute to classify one of these scales, whenever it lies between -0.3 and 0.5 or between -0.3 and 0.1, is not possible.

The other type of visual feature, which – in contrast to the previous one – could be used in direct classification, is the "maximum difference between horizontal and vertical dark segment length". The attribute informs the classification system what the biggest difference is between the horizontal and vertical lengths among all dark segments in the defect (segments composed of dark pixels). Figure 6 shows the decomposition of the feature for line scale. Figure 7 shows the decomposition for "V" scale, which is a defect originating from the finishing train. Its shape resembles the letter "V".

Fig. 4. Horizontal to vertical difference of gradient range – Line scale.

Fig. 5. Horizontal to vertical difference of gradient range – Single strip scale.

Table 2. Significance of visual features.


Fig. 6. Maximum difference between horizontal and vertical dark segment length – Line scale.

Fig. 7. Maximum difference between horizontal and vertical dark segment length – "V" scale.

In this case a threshold could be set at 0.2. This kind of rule might be used in supporting logic to distinguish those two scale defects.
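A post-classification rule of this kind could look like the sketch below; the feature name, the record layout and the assignment of which class lies above the 0.2 threshold are assumptions made for illustration (in the target system such rules would be written in C++ or as T-SQL stored procedures).

# Sketch of a simple post-classification rule using the 0.2 threshold mentioned above.
# The dictionary keys and the direction of the threshold are illustrative assumptions.

def post_classify(defect):
    if defect.get("class") in ("line_scale", "v_scale"):
        # assumed: larger horizontal/vertical dark-segment difference indicates line scale
        if defect["max_h_v_dark_segment_diff"] > 0.2:
            return "line_scale"
        return "v_scale"
    return defect.get("class")

example = {"class": "v_scale", "max_h_v_dark_segment_diff": 0.35}
print(post_classify(example))   # reassigned to line_scale under the assumed rule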

5. CONCLUSIONS AND PERSPECTIVES

In the paper scale defects occurring at hot rolling mill in Kraków were divided into unique classes. Second part of the paper describes process of Boost-ing Trees model creation for the scale classification. Along with the model, 33 most relevant attributes for the model were selected. These numerical visual features were used to describe each scale class by decomposition of its values. Subsequently, the fea-tures can be used in building reliable classifiers for scale recognition. Only part of the scale visual fea-tures, which could be used in classifier building, were presented in this paper.

The next step in the study will be the implementation of the scale classifier with the use of the manufacturer's software. It will depend on manual selection of the scale defects and their assignment to the proper class. Nevertheless, the study assumes that the creation of the best possible classifier could be hard to achieve using the manufacturer's software. To improve its classification decisions, some additional rules have to be created. The rules can be written in the C++ programming language and as Transact-SQL stored procedures. They will be used in the next classification process, called post-classification. This step will be assisted by the knowledge described in this paper.

Acknowledgements. The author would like to express his gratitude to Mr Witold Dymek, who supported this work at Hot Rolling Mill in Kraków and Professor Maciej Pietrzyk and Dr Łukasz Rauch for their enthusiastic encouragement and useful cri-tiques of this research.

REFERENCES

Bakker, A., Hoyles, C., Kent, P., Noss, R., 2006, Improving Work Processes by Making the Invisible Visible, Jour-nal of Education and Work, 19, 343-361.

Breitschuh, W., Crowley, G., Dallemagne, L., Deléglise, A., Diaz-Alvarez, J., Valcarcel, J.M., Di Fant, M., Fiori, S., Hemmerlin, M., Koschack, U. Schroyens K., 2007, ArcelorMittal internal defects catalogue.

Han, J., Kamber, M., 2006, Data mining: concepts and tech-niques, University of Illinois, Urbana-Champaign.

Hand, D., Mannila, H., Smyth, P., 2001, Principles of data mining, The MIT Press, Cambridge.

Melfo, W., Dippenaar, R., Reid, M., 2006, In-Situ Study of Scale Formation under Finishing-Mill Operating Condi-tions, Proc. AISTech 2006, Association for Iron & Steel Technology, Cleveland, Ohio, II, 25-35.

Sun, W., Tieu, A.K., Jiang, Z., Lu, C., 2004, High temperature oxide scale characteristics of low carbon steel in hot rolling, Journal of Materials Processing Technology, 155-156, 1307-1312.

Sun, W., Tieu, A.K., Jiang, Z., Lu, C., Zhu, H., 2003, Surface characteristics of oxide scale in hot strip rolling, Journal of Materials Processing Technology, 140, 76-83.

Statsoft, 2006, Elektroniczy Podręcznik Statystyki PL, Kraków (in Polish).

Webb, A., 2002, Statistical Pattern Recognition (2nd Edition), John Wiley & Sons, Ltd., Chichester, England.

SELECTION OF THE MOST SIGNIFICANT VISUAL ASPECTS OF THE SCALE OCCURRENCE PHENOMENON USING BOOSTING CLASSIFICATION TREES

Summary

The subject of this research is the design and implementation of an effective model classifying all kinds of scale occurring at the hot rolling mill in Kraków. The model and its key variables can describe and distinguish the individual types of scale. Within the work it was decided to deal with the measurement technique and with the use of the measurement data to build an optimal classifier of the defects of this phenomenon. The measurement data, concerning the visual aspects of individual regions of the strip, were provided by the automatic surface inspection system (ASIS), whose principles of operation are presented in the paper. The obtained measurement data were analysed using feature selection methods, and the selected features were


used to build a classifier for scale-type surface defects. The classifier was implemented with the use of data mining methods which, together with the obtained results, are described in detail in this article.

Received: October 27, 2012 Received in a revised form: November 11, 2012

Accepted: November 16, 2012


295 – 303 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

A USER-INSPIRED KNOWLEDGE SYSTEM FOR THE NEEDS OF METAL PROCESSING INDUSTRY

STANISŁAWA KLUSKA-NAWARECKA1*, ZENON PIROWSKI1, ZORA JANČÍKOVÁ2, MILAN VROŽINA2,

JIŘÍ DAVID2, KRZYSZTOF REGULSKI3, DOROTA WILK-KOŁODZIEJCZYK4

1 Foundry Research Institute, Zakopiańska 73, Cracow, Poland 2 Department of Automation and Computer Application of Metallurgy, VŠB – Technical University

Ostrava 17. listopadu 15, Ostrava-Poruba, 708 33 CZECH 3 AGH University of Science and Technology, Mickiewicza 30, Cracow, Poland

4 Andrzej Frycz Modrzewski University, Herlinga-Grudzińskiego 1, Cracow, Poland *Corresponding author: [email protected]

Abstract

This article describes the works related with the development of an information platform to render accessible the knowledge on casting technologies. The initial part presents the results of a survey on the preferences of potential users of the platform regarding areas of the used knowledge and functionalities provided by the system. The second part contains a presentation of selected modules of the knowledge with attention focussed on their functionalities targeted at the user needs. The guide facilitating the use of the platform is a "virtual handbook". The System is used as a coupling link in the diagnosis of defects in castings, while ontological module serves the purpose of knowledge integration when different sources of knowledge are used.

Key words: artificial intelligence, knowledge, distributed and heterogeneous sources, technology platforms, databases integration, ontologies

1. INTRODUCTION

Currently, the software market offers “knowledge" systems for computer-aided design and simulation processes (CAD / CAM) and also knowledge management tools and industrial infor-mation of the ERP / MRPII type. On the other hand, still very poorly developed area remains that of the technological decision support tools, i.e. expert sys-tems, technology platforms to share domain knowledge, tools for integration of knowledge from distributed and heterogeneous sources.

In recent years, the interest in expert systems supporting diagnosis and the technological decision-making process has been subject to some fluctuations. The observed disappointment in this class of systems was due to difficulties related to the collection and application of a sufficient number of rules in knowledge bases. Currently, the development of algorithms for automated knowledge acquisition has aroused new interest of science centres in inference systems, as described in (Adrian et al., 2012; Kluska-Nawarecka et al., 2011a; Kluska-Nawarecka et al., 2011c; Mrzygłód et al., 2007; Zygmunt et al., 2012). Studies are carried out on the possibilities of applying modern knowledge engineering formalisms, as written in (Jančíková et al., 2011; Kluska-Nawarecka et al., 2009; Spicka et al., 2010; Švec et al., 2010), including fuzzy logic, rough sets, decision tables and the


use of multimedia techniques to render this knowledge accessible to users.

With the current flood of knowledge and data, research on methods of knowledge acquisition and integration is unavoidable, including ontologies enabling the modelling of domain knowledge and, ultimately, the creation of semantically structured systems.

The article outlines the future plans and gives selected results of work aimed at building a system platform with the task of creating and sharing knowledge from the area of foundry technologies.

2. ANALYSIS OF USER NEEDS

When the work was started on the development of a concept of the structure, and on the substantive content of the system with determination of the functionality of each of its modules, it was consid-ered necessary to refer to the needs of potential users of the system.

Therefore, the main objective of the first stage of the work was considered to be an interview with the industry and scientific communities in Poland and abroad dealing in some way with the casting prac-tice, to determine the need for different types of functionality of the future system rendering the knowledge accessible. The interview was conducted in the form of questionnaires and discussions carried out with the representatives of industry and research centres. Surveys covered a specific range of the sys-tem utility, namely the area rendering available the knowledge components.

The selected companies represented large, small and medium-sized enterprises (joint stock companies, limited liability companies); experts were selected from the circles of scientists cooperating with plants processing different types of metals.

Respondents were asked to indicate the types of knowledge they believe are most commonly demanded by the manufacturing plants. In the questionnaire they were given the following options:
- literature knowledge about processes, descriptions of processes and applied technologies, handbooks including descriptions of the possible types of treatment,
- norms, certifications, standards, Polish Standards, ISO, etc.,
- visuals: pictures, computer simulations, photographs of castings and defects, and microstructures,
- reports and studies of own production, document templates, balance sheets, ready compilations,
- specification of requirements and properties, and chemical compositions of materials, tables, databases,
- expert knowledge in case of defects and irregularities in the process, request for expert opinion,
- branch statistics, volume of production, etc., market analysis, marketing data, studies, foresight,
- industry statistics, statistical yearbooks, data on production volume in a given sector, data from the Chief Statistical Office (CSO).

The results of the survey are presented in the form of a diagram in figure 1.

Fig. 1. User preferences on the types of shared knowledge.

It is clear that professionals reach most often for multimedia resources in the form of photographs and simulations, and also for diagrams and visuals in the form of charts and diagrams. The following were also identified as important:
- norms, standards and certifications,
- literature knowledge and handbooks,
- specification of requirements and properties, chemical composition of materials, tables, databases,
- market analysis and marketing studies,
- expert knowledge when it is necessary to have an expertise performed.

Of minor importance were considered industry statistics, and reports and studies of own production. Probably the reason is the effective circulation of such information and needs already satisfied by the existing tools and management systems.

In order to clarify the need for different types of knowledge, the respondents were requested to assess the individual potential functionalities of the future information system operating in their companies. The proposed list of functional features is presented in table 1.

Table 1. List of potential functionalities of an information system.

Potential functionalities of the information system:
- virtual handbook: descriptions of processes and applied technologies, handbooks including descriptions of possible treatments, access to publications,
- visuals: pictures, simulations, photographs of castings and defects, microstructures,
- electronic standards, certifications: Polish standards, ISO standards, etc.,
- databases, catalogues: specifications of requirements and properties, chemical compositions of materials, catalogues of materials,
- e-learning: virtual training in advanced manufacturing technologies, interactive courses in casting techniques,
- expert systems: discovering the causes of defects, detection of process irregularities,
- tools for classification: determination of defect types and the class/grade of material from which the product should be made, etc.,
- tools to make reports based on production data: statements and reports on e.g. production volume, consumption of materials, costs, severity of defects.

The list of potential functionalities of the system was based on the conducted research and on the currently available capabilities of a system developed as a result of cooperation between the team of workers from the Foundry Research Institute and the Knowledge Engineering Team from the Department of Applied Informatics and Modelling at the Faculty of Metals Engineering and Industrial Computer Science, University of Science and Technology, Cracow, Poland.

Definitely the highest rated was the proposed "virtual handbook" - a platform to share collections of documents with descriptions of processes and technologies, as well as handbooks and publications in electronic form.

Also highly rated were the visuals: photographs, simulations, pictures, photographs of castings and defects, and microstructures, as well as databases, catalogues, the specifications of requirements and properties, chemical compositions of materials, and catalogues of materials - all of them reflecting the most common needs. The responses obtained allowed establishing the following ranking of the other functionalities: 1. databases, catalogues; 2. expert systems; 3-4. classification tools and reporting tools (tied); 5. electronic standards, certifications; 6. e-learning.

As a general conclusion from the survey and the discussions held, it can be stated that the knowledge sharing needed is one that, while giving the user an easy-to-handle interface, will also ensure a constant supply of current information and knowledge from the area of the casting practice, derived not only from the literature, but also from all other sources such as databases, or knowledge obtained algorithmically from the process data. To achieve this, the above-mentioned sources will have to be integrated and made ready for processing (e.g. indexed for convenient search).

At the same time, the need arises to design the knowledge base in such a way as to make it interactive, enabling the user to obtain, through a dialogue with the system, exactly the knowledge that is necessary for solving a given problem.

3. KNOWLEDGE-SHARING PLATFORM

When the concept of a knowledge-sharing platform on casting technologies was created, it was assumed that the platform should include all major solutions developed in the course of previous works on computer-aided manufacturing processes. At the same time it should be enriched with new modules and functionalities, targeted at meeting the users' preferences as regards the application of new trends and opportunities that arise from the


development of methods and technologies based on computer science, as discussed in (David et al., 2011; Olejarczyk et al., 2010).

Consequently, the platform has a multi-module structure, where individual modules can operate independently, and the results of their actions are subjected (if necessary) to a process of integration.

The degree and manner of this integration depend mainly on the scenario of actions taken by the user (when using the system in an interactive mode).

Among the modules already existing and available on the Internet, one can mention the Infocast system (including databases on publications, standards, and catalogues) and the Castexpert system designed to serve as a tool for the diagnosis of casting defects, assisted with knowledge presented in the form of multimedia.

Below are outlined the results of the implementation of additional modules specific to the operation of the whole platform, which received most attention from the users.

3.1. Virtual handbook

The virtual handbook, whose schematic diagram is shown in figure 2, is a kind of clasp linking together the functionalities of the other modules of the platform. Using this tool, the user gets a general idea about the type of knowledge provided by the platform and, as a result, can find out which variant of the scenario will lead him to a solution of the problem.

An example scenario of the use of the handbook is presented in table 2.

3.2. System for decision support and classification of defects in castings

The RoughCast system allows the use of approximate logic to enable a classification of casting defects according to international standards: Polish, Czech and French. The databases can be expanded in the future to include other standards.

Fig. 2. Specification of functional requirements for the knowledge module "Virtual Handbook".

The approximate logic is a tool to model the uncertainty arising from incomplete knowledge resulting from the granularity of information. The main application of approximate logic is in classification,


as with this logic it is possible to build approximation models for a family of sets of elements whose membership in the sets is determined by attributes. The conducted research allowed developing a methodology for the creation of decision tables representing the knowledge of casting defects.

Using this methodology, a decision table was developed for the selected defects in steel castings. A fragment of the array is presented in table 3.

Based on rough set theory, elementary sets can be determined in the array. Sets determined in this way represent the division of the universe in terms of the indiscernibility relations for the attribute distribution.

The most important step is to determine the upper and lower approximations in the form of a pair of precise sets. The abstract class is the smallest unit in the calculation of rough sets. Depending on the query, the upper and lower approximation is calculated by summing up the respective elementary sets.

Table 2. An example of the use of the virtual handbook.

Name: Virtual handbook
Actors: End User, Expert, Knowledge Engineer
Shareholders/Stakeholders: Artificial intelligence, Data, Knowledge engineering, Sources, Data Mining, Internet
Short description: Preparing a specialised Virtual Handbook.
Preliminary conditions: The user must have access to a computer and a specified topic of the handbook.
Final conditions: Handbook ready to display. Database updated and saved.
The main flow of events:
1. The user opens the Virtual Handbook interface (on-line).
2. The user writes in the subject.
3. The system analyses the subject:
   a) finds data on the Internet and in the documents and databases in natural language,
   b) collects statistical data,
   c) searches alternative sources of knowledge (literature, source materials, expert knowledge, etc.).
4. Cataloguing of data:
   a) taken from the Internet by means of Data Mining methods,
   b) statistical data using rule induction algorithms,
   c) data from sub-item 4.b and alternative sources of knowledge 3.c using decision tables.
5. Saving in XML files.
6. Algorithms of artificial intelligence prepare the relevant data for display in the handbook (semantic analysis with the use of ontologies).
7. Displaying the appropriate page of the handbook.
Special requirements: Device with access to the Internet.

Table 3. Fragment of the decision table for defects in steel castings.

The system operates on decision tables - this structure of the data allows the use of an inference engine based on approximate logic. The system maintains a dialogue with the user, asking questions about the successive attributes. The user can select the answer (the required attribute) in an intuitive manner. Owing to this method of formulating queries, there is no need for the user to know the syntax and semantics of queries in the approximate logic. However, to make this dialogue possible without the need for the user to build a query in the language of logic, the system was equipped with a query interpreter with semantics much narrower than the original Pawlak semantics. It has been assumed that the most convenient way to build queries in the case of casting defects is by selection from a list of attributes required for a given object (defect). The user chooses which characteristics (attributes) the selected defect has. This approach is consistent with the everyday situation in which the user has to deal with specific cases of defects, and not with hypothetical tuples. Queries set up in this way are limited to conjunctions of attributes, and therefore the query interpreter has been equipped with only this one logical operator. The knowledge base created for the needs of the RoughCast system is the embodiment of an information system in the form of decision tables. It contains tabulated knowledge on the characteristics of defects in steel castings taken from Polish Standards, Czech studies, a French directory of casting defects, and a German textbook of defects.
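The sketch below illustrates this mechanism on a toy decision table; it is only an illustration of the approach, and the attribute names, attribute values and defect labels are hypothetical, not taken from the actual RoughCast knowledge base.

from collections import defaultdict

# Hypothetical decision table: condition attributes -> decision (defect name).
rows = [
    ({"location": "surface", "shape": "crack",  "occurs": "hot"},  "hot crack"),
    ({"location": "surface", "shape": "crack",  "occurs": "cold"}, "cold crack"),
    ({"location": "inside",  "shape": "cavity", "occurs": "hot"},  "shrinkage cavity"),
    ({"location": "inside",  "shape": "cavity", "occurs": "hot"},  "gas porosity"),
]

def elementary_sets(rows, attributes):
    """Group rows that are indiscernible with respect to the chosen attributes."""
    groups = defaultdict(set)
    for i, (attrs, _) in enumerate(rows):
        groups[tuple(attrs[a] for a in attributes)].add(i)
    return list(groups.values())

def approximations(rows, attributes, defect):
    """Lower and upper approximation of a defect class, summed over elementary sets."""
    target = {i for i, (_, d) in enumerate(rows) if d == defect}
    lower, upper = set(), set()
    for block in elementary_sets(rows, attributes):
        if block <= target:      # the block certainly belongs to the class
            lower |= block
        if block & target:       # the block possibly belongs to the class
            upper |= block
    return lower, upper

def query(rows, **required):
    """Conjunction of attribute values, as built through the RoughCast dialogue."""
    return [d for attrs, d in rows if all(attrs.get(k) == v for k, v in required.items())]

if __name__ == "__main__":
    attrs = ["location", "shape", "occurs"]
    print(approximations(rows, attrs, "gas porosity"))    # lower = set(), upper = {2, 3}
    print(query(rows, location="surface", occurs="hot"))  # ['hot crack']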

Using this formalism, it becomes possible to resolve a number of difficulties arising from the granularity of foundry knowledge, in the form of indistinguishable descriptions created with attributes and inconsistent classifications from various sources (as in the case of standards for steel castings).

3.3. Cluster analysis

General design requirements apply to the data mining system, whose main objective is to classify documents (articles) by thematic classification. The implementation of the task module was developed based on the full-text clustering method, supported by the use of a thesaurus. The process of full-text cluster analysis was used to create the task categories (conceptual clustering) as a method of unsupervised learning. The aim was to design a system operating efficiently, which, based on the documents provided (in the correct format and in accordance with the established standards and norms), will carry out the task of clustering the documents by thematic classification based on data mining methods - cluster analysis. This module is fully compatible with the directly cooperating document repository systems and databases, and should be, to the greatest extent possible, susceptible to subsequent modifications or development.

In the task of conceptual grouping, the training set Ω is a collection of articles provided by a compatible system in the form of a knowledge base, while the task of the aggregation analysis is to split these objects into categories (aggregations) and construct a description of each category (aggregation centroids), enabling the efficient classification of new articles. As a result of this process, each article is included in one of the resulting aggregations. Each of the resulting aggregations has its centroid, which represents a concept associated with this aggregation.

The classification of text documents is a very complex problem. The main reason for the difficulties is the semantic complexity of natural language. The difficulties associated with the classification of documents written in natural language are related to, among others, polysemy, i.e. terms having many different meanings. For example, the term 'table' may refer to a piece of furniture, or it may also mean a set of specifically arranged data, numbers, etc. The dimension of the feature space in document classification tasks, related to the number of possible words in a natural language (usually of the order of tens of thousands of words), is also a difficulty. On the other hand, representing documents using a selected (small) number of words will reduce the quality of classification.

Therefore, the first step of the task of conceptual clustering is to reduce the articles from the knowledge base to the basic grammatical forms. The process of cluster analysis will compare the sets of articles (with each other) in search of common words, excluding irrelevant words (or, also, but, etc.). Articles will be grouped in clusters on the basis of the probability of adjustment determined by the number of occurrences of the common words, and on this basis the aggregation centroid will be created, as presented in figure 3.
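A minimal sketch of this grouping step is given below; the stop-word list, the similarity threshold and the sample article texts are illustrative assumptions only, and the actual module additionally applies reduction to basic grammatical forms and a thesaurus, as described above.

from collections import Counter

STOP_WORDS = {"or", "also", "but", "the", "of", "and", "a", "in", "is"}  # illustrative

def bag_of_words(text):
    """Reduce an article to a multiset of lower-cased words without stop words."""
    words = (w.strip(".,;:()").lower() for w in text.split())
    return Counter(w for w in words if w and w not in STOP_WORDS)

def overlap(a, b):
    """Number of common word occurrences - the 'probability of adjustment'."""
    return sum((a & b).values())

def cluster(articles, threshold=2):
    """Greedy grouping of articles; each aggregation keeps a centroid of word counts."""
    aggregations = []  # each entry: {"members": [...], "centroid": Counter()}
    for title, text in articles:
        bag = bag_of_words(text)
        best = max(aggregations, key=lambda g: overlap(g["centroid"], bag), default=None)
        if best is not None and overlap(best["centroid"], bag) >= threshold:
            best["members"].append(title)
            best["centroid"] += bag        # the centroid accumulates the shared vocabulary
        else:
            aggregations.append({"members": [title], "centroid": bag})
    return aggregations

if __name__ == "__main__":
    sample = [
        ("A1", "Hot cracks in steel castings and crack prevention"),
        ("A2", "Prevention of cracks in castings of cast steel"),
        ("A3", "Cluster analysis of foundry documents"),
    ]
    for group in cluster(sample):
        print(group["members"])   # expected grouping: ['A1', 'A2'] and ['A3']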

To improve the quality of cluster formation and the classification of new articles, these processes will be supported by a thesaurus (a structured set of keywords), which will also eliminate the problem of comparing sets of articles that do not share the same keywords in the text and yet belong to the same category.

Constraints can be formulated in OCL for the class Aggregation_Centroid, as exemplified below.

context Aggregation_Centroid
-- identifier and path must be set and unique across all centroids
inv: self.ID_Centroid >= 1 and self.Aggregation_Centroid_Path <> ''
inv: Aggregation_Centroid.allInstances()
     ->forAll(p1, p2 | p1 <> p2 implies p1.ID_Centroid <> p2.ID_Centroid)
inv: Aggregation_Centroid.allInstances()
     ->forAll(p1, p2 | p1 <> p2 implies p1.Aggregation_Centroid_Path <> p2.Aggregation_Centroid_Path)

context Aggregation_Centroid::Generate_Aggregation_Centroid() : Boolean
pre:  self.Aggregation_Centroid_Path = ''
post: self.Aggregation_Centroid_Path <> ''

context Aggregation_Centroid::get_ID() : Integer
post: result = self.ID_Centroid

context Aggregation_Centroid::get_PATH() : String
post: result = self.Aggregation_Centroid_Path

The use case diagram is shown in figure 4. Table 4 gives a description of one of the possible utilizations of the discussed algorithm.

Fig. 3. Diagram of classes in the module of document aggregation analysis.

Fig. 4. Use Case Diagram.


Table 4. Description of the use case.

Name: Automatic clustering of documents
Actors: System
Stakeholders/Interests: The system manages the service module of the operations of the database. It performs import operations on documents from a knowledge base and export operations on the resulting database to ontology classes. The system also manages the module of document clustering by the method of Data Mining.
Short description: Automatic clustering of documents in terms of thematic fit.
Pre-conditions: Obtaining edited documents and information in the form of a knowledge base.
Post-conditions: Thematically grouped database of articles is forwarded.
Main flow of events:
1. Receiving the knowledge base.
2. The system performs cluster analysis for the resulting knowledge base.
3. The system creates a new database of articles grouped thematically based on the cluster analysis carried out.
4. The system sends the created database.
Alternative flow of events:
1. When a new article is downloaded, the cluster analysis classifies it into one of the topics.
2. When there is a limit to the number of articles classified to various topics, and all these articles are characterised by a high degree of fit with each other, then a new topic will be created to which these articles will be assigned.
Special requirements:
1. For the main flow of events to occur, the knowledge base must be delivered.
2. For the alternative flow of events to occur, the artificial intelligence system must initiate the delivery of a new article.

4. FINAL REMARKS

The article describes solutions developed for selected modules of a platform for sharing the knowledge of foundry technologies in the context of the preferences expressed by users. It seems that the studies carried out to create this context make an interesting contribution to the contents of this article, since the results of such surveys are not often disclosed in presentations of different expert and decision-making systems.

In selecting the modules of the platform described in the article, it was attempted, on the one hand, to show their diversity and, on the other, to expose the way in which they will be adapted to the declared user needs. It has been the intention of the creators and promoters of the platform to offer a system that will have an evolving nature and will be gradually enriched with new modules, according to the emerging needs. Currently, work is underway on the implementation of modules for the automatic acquisition of knowledge from the Internet and for the analysis and classification of text documents, as presented in (Kluska-Nawarecka et al., 2011b).

Scientific work financed from the funds for scientific research as an international project. Decision No. 820/N-Czechy/2010/0.

REFERENCES

Adrian, A., Mrzygłód, B., Durak, J., 2012, Model strukturyzacji wiedzy dla systemu wspomagania decyzji, Hutnik - Wiadomości Hutnicze, 1, 67-70 (in Polish).

David, J., Vrožina, M., Jančíková, Z., 2011, Determination of crystallizer service life on continuous steel casting by means of the knowledge system, Transactions On Circuits And Systems, 10, 359-369.

Jančíková, Z., Ružiak, I., Kopal, I., Jonšta, P., 2011, The Influence of Rubber Blend Aging and Sample on Heat Transport Phenomena, Defect and Diffusion Forum, 312-315, 183-186.

Kluska-Nawarecka, S., Wilk-Kołodziejczyk, D., Dobrowolski, G., Nawarecki, E., 2009, Structuralization of knowledge about casting defects diagnosis based on rough sets theory, Computer Methods in Materials Science, 9, 341-346.

Kluska-Nawarecka, S., Wilk-Kołodziejczyk, D., Regulski, K., 2011a, Practical aspects of knowledge integration using attribute tables generated from relational databases, Semantic Methods for Knowledge Management and Communication, eds, Katarzyniak, R., Chiu, T.F., Hong, C.F., Nguyen, N.T., Springer, Berlin, Heidelberg, 13-22.

Kluska-Nawarecka, S., Wilk-Kołodziejczyk, D., Dziaduś-Rudnicka, J., Smolarek-Grzyb, A., 2011b, Acquisition of technology knowledge from online information sources, Archives of Foundry Engineering, 11, 107-112.

Kluska-Nawarecka, S., Wilk-Kołodziejczyk, D., Regulski, K., Dobrowolski, G., 2011c, Rough Sets Applied to the RoughCast System for Steel Castings, Intelligent Information and Database Systems, Proc. Third International Conference ACIIDS 2011, eds, Nguyen, N.T., Kim, C.G., Janiak, A., Daegu, Korea, 52-61.

Mrzygłód, B., Adrian, A., Kluska-Nawarecka, S., Marcjan, R., 2007, Application of Bayesian network in the diagnosis of hot-dip galvanising process, Computer Methods in Materials Science, 7, 317-323.



Olejarczyk, I., Adrian, A., Adrian, H., Mrzygłód, B., 2010, Algorithm for controling of quench hardening process of constructional steels, Archives of Metallurgy and Materials, 55, 171-179.

Spicka, I., Heger, M., Franz, J., 2010, The mathematical-physical models and the neural network exploitation for time prediction of cooling down low range specimen, Archives of Metallurgy and Materials, 55, 921-926.

Švec, P., Jančíková, Z., Melecký, J., Koštial, P., 2010, Implementation of neural networks for prediction of chemical composition of refining slag, Proc. Conf. Metal 2010, International Conference on Metallurgy and Materials, Tanger, 155-159.

Zygmunt, A., Koźlak, J., Nawarecki, E., 2012, Analiza otwartych zrodel internetowych z zastosowaniem metodologii sieci spolecznych, Bialy wywiad: otwarte zrodla informacji – wokol teorii i praktyki, eds: W. Filipkowski, W. Mądrzejowski, C.H. Beck, Warszawa, 197-221 (in Polish).

SYSTEM UDOSTĘPNIANIA WIEDZY INSPIROWANY POTRZEBAMI UŻYTKOWNIKA Z ZAKRESU PRZEMYSŁU PRZETWÓRSTWA METALI

Streszczenie

Artykuł dotyczy prac związanych z realizacją platformy informatycznej, służącej do udostępnienia wiedzy z zakresu technologii odlewniczych. W części początkowej przedstawiono rezultaty sondażu dotyczącego preferencji potencjalnych użytkowników platformy odnośnie obszarów wykorzystywanej wiedzy oraz funkcjonalności udostępnianych przez system. Część druga zawiera prezentację wybranych modułów wiedzy ze zwróceniem uwagi na ich funkcjonalności ukierunkowane na potrzeby użytkowników. Rolę przewodnika ułatwiającego korzystanie z platformy pełni „wirtualny poradnik”, system RoughCast służy do sprzęgania diagnostyki wad odlewów, zaś moduł ontologiczny pozwala na integrację wiedzy pochodzącej z różnych źródeł.

Received: October 16, 2012
Received in a revised form: November 28, 2012
Accepted: December 7, 2012


THE PLATFORM FOR SEMANTIC INTEGRATION AND SHARING

TECHNOLOGICAL KNOWLEDGE ON METAL PROCESSING AND CASTING

STANISŁAWA KLUSKA-NAWARECKA1, EDWARD NAWARECKI2, GRZEGORZ DOBROWOLSKI2, ARKADIUSZ HARATYM2, KRZYSZTOF REGULSKI2*

1 Foundry Research Institute, Zakopianska 73; Cracow, Poland

2 AGH University of Science and Technology, Mickiewicza 30, 30-059 Cracow, Poland *Corresponding author: [email protected]

Abstract

This paper presents the concept of a knowledge sharing platform, which uses an ontological model for integration purposes. The platform is expected to serve the needs of the metals processing industry, and its immediate purpose is to build an integrated knowledge base, which will allow semantic search supported by a domain ontology. Semantic search will resolve the difficulties encountered in the class of Information Retrieval Systems associated with polysemy and synonyms, and will make the search for properties (relations), not just keywords, possible. An open platform model using Semantic MediaWiki in conjunction with the authors' script parsing the domain ontology will be presented.

Key words: knowledge integration, knowledge base, decision support, semantic search, metal processing, casting

1. CONTEXT OF THE RESEARCH WORKS

For many years, tools for the construction of integrated knowledge bases have been developed at the Foundry Research Institute in Cracow (Dobrowolski et al., 2007; Górny et al., 2009; Kluska-Nawarecka et al., 2002; Nawarecki et al., 2012; Kluska-Nawarecka et al., 2005). The studies carried out at present are aimed at improving the information retrieval systems (IR systems) in such a way as to make the collection and sharing of documents easier and more functional from the user's point of view. In 1997, the Institute launched the SINTE database, a bibliographic casting database containing abstracts of over 38,000 articles published in various casting journals (American, English, French, German, Czech, Slovenian, Russian, Ukrainian), proceedings of conferences, and R&D works written by the staff of the Foundry Research Institute. Together with NORCAST, CASTSTOP and the CASTEXPERT diagnostic system, the SINTE database forms a part of the INFOCAST system, a decentralised decision-making information system which is intended to support the casting technology both in industry and in scientific and research work (Marcjan et al., 2002). In such an extended knowledge base it becomes increasingly difficult to reach the information searched for. This is particularly true when the user does not know in advance what kind of resources will be of interest to him, that is, whether he is looking for publications on the subject indicated, or for information on standards and certificates, or for knowledge in the form of rules or guidance on the characteristics of materials and physico-chemical properties.


In extended IR systems, such as the SINTE database, there are internal tools for cataloguing the content, usually based on keywords. Categorising is done with the help of a structured thesaurus, which is a set of descriptors arranged in hierarchies. These descriptors allow us to describe the content of the document and make the search of the database possible, as it happens in library catalogues.

However, this way of cataloguing the content has some important limitations. Search based on keywords does not allow the results to account for a natural semantic measure of the distance between the query and a set of documents. It likewise gives no possibility of defining a rating of the responses (documents) in relation to the request. Problems associated with polysemy (variety of meanings depending on the context) or synonyms (when a word does not appear in the document, although the document is closely connected with that word) pushed researchers towards semantic description of documents using models in description logic (Ciszewski & Kluska-Nawarecka, 2004; Mrzygłód & Regulski, 2012; Regulski et al., 2008).

Among the knowledge bases used in the Foundry Research Institute, a CASTSTOP system also appears (Połcik, 1999). It allows the selection of cast materials based on their physico-chemical and technological properties. Additionally, the database offers the possibility to search for Polish counterparts of foreign standard alloy grades, such as grey cast iron, malleable cast iron, and cast alloys of Al, Cu, Zn and Mg. The total content of the database includes information on more than 1000 alloys. Knowing the technical requirements of the designed product, the user, operating via the system, can select the appropriate cast alloy meeting these requirements. However, also in this case, the system has some limitations, namely it does not include knowledge of the material properties that can be obtained by thermal or mechanical treatment. Such information goes beyond the scope of standardised material properties, and thus cannot be easily compared for different national or foreign suppliers.

An attempt to create an algorithm which will enable searching for the material by its physico-chemical or mechanical parameters, taking also into account the possible upgrading process, requires the use of a knowledge model of the treatment processes. Also in this situation, the answer and the way to solve this problem can be semantic search based on an ontological model.

2. THE PLATFORM FOR SEMANTIC INTEGRATION AND SHARING KNOWLEDGE ON METAL CASTING – CASTWIKI

The knowledge sharing platform should act as an intermediary between the user and heterogeneous sources of knowledge. Using appropriate knowledge formalisms, such as description logic-based ontologies, it aims at the integration of resources in such a way that going through a variety of sources is done with a significant benefit to the user, and also in a way transparent to him. The platform aims at knowledge sharing for non-routine tasks that are difficult to predict in the normal course of production. The solutions and implementations presented in the previous section comprise knowledge modules that can operate independently (as evidenced by experience), but using them to create an integrated information and decision-making system enriched ontologically would provide additional functionalities and make it easier for the user to use the functionalities already existing.

The platform is conceived as an ontological tool for the integration of various subsystems in a semantic network, where individual packets of information and data, as well as components of the knowledge transferred in the system, are described using metadata in accordance with the shared ontological model, so that they can be explicitly used (shared) by the individual modules and also remain ready for reuse by other computerised systems, as the platform is an open system.

Recent studies done at the Foundry Research Institute among experts from the world of industry led to the conclusion that the goal should be to create a platform for knowledge sharing that, by giving the user an easy-to-use interface, would provide him with a steady supply of current information and knowledge in the field of the metalcasting practice, coming not only from the literature, but also from all other sources, such as databases, or knowledge obtained algorithmically from the process data. To achieve this, these sources will need to be properly integrated and ready for processing (at least indexed for easier search). The system should allow the design of a knowledge base in such a way that it is interactive and makes the codification of expert technological knowledge possible. The task of the proposed system will be the semantic integration of the collected data, information and knowledge. The aim will be to provide the end user with transparent access to the integrated knowledge base based on


heterogeneous resources. The integration tool will be ontologies.

3. ONTOLOGIES – MODELS IN DESCRIPTION LOGIC – APPLICATION

The Descriptive or Description Logic (DL) is a subset of First Order Logic (FOL), which can be used to represent a domain in a formalised, structured and, at the same time, computer-processable manner. The basic elements of representation are unary predicates corresponding to sets of objects and binary predicates mapping relationships between objects. The descriptive logic allows creating definite descriptions that depict the domain using concepts (unary predicates) and roles (binary predicates).

For example, a steel casting with hot cracks in two places may be written in the DL notation as:

Casting ⊓ ∃made.CastSteel ⊓ (≥ 2 defective) ⊓ ∀defective.HotCrack

Hence it follows that concepts can be built from atomic concepts (Casting, CastSteel, HotCrack), atomic roles (made = made of, defective = having a defect), and constructors. Using the logic, one can create definitions of concepts:

Casting ≡ Product ⊓ ∃cast.Mould

or axioms:

CastSteel ⊑ Alloy ("every CastSteel is an Alloy")

Description logic was created for ontologies, and therefore a very important issue was to create simultaneously a language that would allow the symbolic language of logic to be written in the form of computer code. Such a language for the description logic has proved to be OWL (Web Ontology Language). Creating a kind of shared formal language, ontologies allow integrating a wide variety of distributed sources of knowledge in a given field, overcoming the problem of differences in the systemic, syntactic, and semantic areas which, so far, has been the biggest challenge for computer tools, often preventing a clear identification of a concept and, therefore, finding information related to this concept.

Ontology is not a database schema, but a simplification can be used: for knowledge repositories, ontology is what the entity diagram is for a database; it is a diagram, a model of a field of knowledge, understandable by both computers and humans.

The Department of Applied Informatics and Modelling at the AGH Faculty of Metals Engineering and Industrial Computer Science has developed a domain ontology for different cast iron grades and changes in their properties under the influence of treatment, presented in figure 1. Directly under the parent class there are 5 main categories: Treatment, Alloying_elements, Carbon_form, Properties, and Cast Iron. Altogether, they accumulate more than 90 general terms, on which the reasoning in the CastWiki Platform will be based.

Fig. 1. Symbolic representation with directed graph of a fragment of domain ontology in the field of cast iron.
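For illustration only, a small fragment of such a domain ontology could be assembled with the rdflib package (also used by the parsing script described in section 3.4); the namespace, the class names and the property below are hypothetical stand-ins and do not reproduce the actual OWL file of the ontology.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

CAST = Namespace("http://example.org/casting#")   # hypothetical namespace

g = Graph()
g.bind("cast", CAST)

# Top-level categories of the domain ontology (cf. figure 1).
for name in ["Treatment", "Alloying_elements", "Carbon_form", "Properties", "Cast_Iron"]:
    g.add((CAST[name], RDF.type, OWL.Class))
    g.add((CAST[name], RDFS.subClassOf, OWL.Thing))

# A few illustrative subclasses and one object property.
g.add((CAST.Ductile_cast_iron, RDF.type, OWL.Class))
g.add((CAST.Ductile_cast_iron, RDFS.subClassOf, CAST.Cast_Iron))
g.add((CAST.Nickel, RDF.type, OWL.Class))
g.add((CAST.Nickel, RDFS.subClassOf, CAST.Alloying_elements))

g.add((CAST.has_alloying_element, RDF.type, OWL.ObjectProperty))
g.add((CAST.has_alloying_element, RDFS.domain, CAST.Cast_Iron))
g.add((CAST.has_alloying_element, RDFS.range, CAST.Alloying_elements))

# A natural-language description attached to a class, as used later by CastWiki.
g.add((CAST.Cast_Iron, RDFS.comment, Literal("Iron-carbon casting alloy", lang="en")))

print(g.serialize(format="turtle"))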


3.1. OntoGRator System

At the Foundry Research Institute in Cracow, attempts to use ontologies for the integration of knowledge in the field of metal processing have been going on for several years in cooperation with the AGH Department of Computer Science (Dobrowolski et al., 2007; Regulski et al., 2008). The OntoGRator system developed at that time allowed describing in a strict and formal manner the area that the integrated data were related with, and also specifying the semantics of the integrated resources. Although the OntoGRator system solved the problems of a semantic description of the metal processing area, it did not enjoy the sympathy of its potential users. The lack of success was due to, among others, a very complex structure of the system, which consisted of two main subsystems:
- OntoGRator Engine: the engine that integrates data from multiple heterogeneous sources, including information on the problem area contained in the domain ontology. This subsystem provides the data it has integrated in the form of a new ontology, expanded by the data from external sources, operating through the Jena API programming interface,
- OntoGRator Web: an application in J2EE technology presenting, in the form of web pages, the integrated ontologies available through the Jena API programming interface.

A user who was not a knowledge engineer might have great difficulties in understanding the operating principle of the system. Without basic knowledge of ontologies, the system became incomprehensible. Additionally, the very idea of the system assumed its ability to integrate structured knowledge resources, such as databases and documents from servers located on the Internet, to which the user had access (Adrian et al., 2007). However, this assumption turned out to be ahead of its time: the actual databases often did not provide any API, and the user had no access to them via a network interface.

A much more functional solution has proved to be a system that provides the ability to insert the content directly, instead of placing the URL / URI for each resource.

3.2. CastWiki Platform

The example of Wikipedia (http://en.wikipedia.org/) shows that it is possible to create a system in which each user has the ability to edit and add content, and at the same time a high level of quality of the accumulated knowledge is maintained through supervision and control, and the ability of other users to introduce their own amendments. Participation in editing Wikipedia is voluntary and unpaid, and millions of users around the world every day add new definitions and edit the existing ones. Wikipedia's success has inspired software developers to implement industrial systems operating on the same principle, but being the sole repository of a company. Systems operating in this way, known as content management systems (CMS) or idea management systems, inherit from Wikipedia several advantages:
- Wiki tools are a popular source of information and knowledge, which most internet users have already encountered and become familiar with,
- Wiki keeps the knowledge resources constantly updated through editing, while discussion leading to the development of a final version of the problem is an integral part of the entry in Wiki,
- Wiki technology is as simple as possible, requires minimal skills to edit and add new content, and is available to everyone,
- Wiki structure provides a description of the concepts in natural language and at the same time contains a unique URI, which is an effective way to identify concepts in the knowledge model.

Thus, a wiki-type tool successfully meets the

most important demands of knowledge management: it allows the codification of knowledge, the recording of experience and of the results of the creation of new (experimental) knowledge by free editing of entries; it supports discussion on specific concepts, giving the opportunity to generate the phenomenon of externalisation of knowledge; and, by maintaining the history of discussion on a given topic, it can also be personalised. The ability to create a "stub article", which is only a draft definition allowing for extension, too short to serve as a definition of an encyclopaedic nature, but still giving some information about the topic, is a key aspect here. It is precisely in this way, by creating first a short description, incomplete and uncertain, that we allow the discussion to be started on a given topic. Other


users can participate in determining the definition of the concept, adding fragments of the description. Such a scheme of action allows the collective creation of knowledge resources, the so-called shared conceptualisation. Creating the "stub articles" is a voluntary activity, the aim of which is to liberate the externalisation of knowledge by encouraging discussion on a given topic.

However, Wikipedia as a public system is not an acceptable solution for companies that need to restrict access to their knowledge. Knowledge in industrial plants is valuable, but also highly specialised. That is why it was necessary to create a separate platform, using the standard Wiki specifically for the needs of foundry plants. MediaWiki software, the platform used by Wikipedia and made available under the GPL licence, was applied.

The system called CastWiki, schematically presented in figure 2, is designed to provide a platform for the exchange of knowledge and for recording the casting knowledge of specialists in the field of metal processing. The Wiki mechanism allows the inclusion of such types of content as:
- descriptions in natural language,
- graphic files,
- hypertext links to other concepts in CastWiki,
- links to all the resources available in the network (documents, catalogues, images, photographs, animations, databases) and having their own URL.

In this way, it allows the integration of knowledge already stored in digital form as a component of other knowledge systems (e.g. INFOCAST, CastExpert+, etc.).

The integration of these data and knowledge resources (as well as those added during the use of the sources) consists in describing the resources with terms (concepts) of the ontology, and then mapping their structure to the underlying ontology components. For each class in the ontology, a description in natural language can be added. Ontology editing tools such as Protégé permit the placement of descriptions in text form directly in the description of the OWL ontology. They also allow the user to place references in the form of a URL. This gives the possibility to transfer unstructured knowledge, which the definitions in natural language are, and photographs to the CastWiki knowledge base. Each concept (article) in CastWiki acquires its unique URI / IRI, which can also serve as a reference to the class description in the ontology.

A problem in Wiki-class systems is page duplication and redundancy of knowledge. Co-creation by multiple users leads to situations where, for the same substantive term, there are several articles under various entries (e.g. L200HNM cast steel, which is also designated G200CrMoNi4-3-3). In this situation, it is necessary to integrate the duplicate articles and create redirections of the individual entries to the integrated article. Ontology facilitates the analysis of overlapping terms through the rdfs:seeAlso property, which allows placing related terms directly in the ontological description, thus greatly facilitating the work of CastWiki editors.

Fig. 2. Description of classes in CastWiki.

Another problem that is solved due to this structure of the knowledge model is the problem of homonyms. Concepts with the same name require the creation of an additional article in CastWiki, which will be a list of words that share the same spelling and pronunciation, with a short note about the context of each word. Ontology also solves the problem of homonyms: the model itself cannot have two classes with the same name, which forces the


ontology engineer to extend the class name in such a way as to give the context in accordance with the namespace.

Ontology also provides the ability to create a more structured knowledge base than the traditional wiki approach. Namely, Wikipedia does not allow the definition of relationships. The concept, which is a non-autonomous object, cannot have a representation in the form of a Wikipedia entry. Creating one's own CastWiki system is a way to avoid this limitation. Users gain the ability to create descriptions not only for classes and instances in the ontology, but also for relationships (object properties).

For example, in the ontology there is a relationship preventive_means, for which the basic Wiki version has no place in a separate article. Therefore, knowledge about how to prevent a casting defect would have to be included in the description of a particular type of defect. CastWiki allows us to create an article integrated with the preventive_means relationship, so that the user can easily create a document that contains basic information about how to prevent defects, while collecting all the known resources of knowledge on this subject (including the specification of defects which are related in the article, or links to specific procedures to prevent defects). This form of knowledge collection provides a hypertext structure of the system and ease of navigation across the resources offered to the user, who needs no preliminary knowledge of the conceptual model of the system. At the same time, the user navigating across the ontology can easily find a definite description of a relationship, not just its member classes.

3.3. Semantic CastWiki

Semantic MediaWiki (SMW) is a complex semantic extension of the MediaWiki platform (a free Wiki-type solution available under an open source licence, developed by the Wikimedia Foundation, which is the basis for most of its projects, such as Wikipedia, Wiktionary and Wikinews) with mechanisms to improve the extraction, search, valuation, marking and publishing of Wiki content. It also provides a platform for software development, which makes it the fastest growing project of this type in the network. In the present work, this application provides the basic mechanisms for handling the semantic issues.

Semantic annotations that have been developed for SMW are designed in such a way as to enable a faithful export of ontologies in the OWL DL format. It is worth mentioning that the SMW user interface does not require a formal interpretation of OWL DL, nor does it impose restrictions on expressiveness. OWL DL ontology structures can be divided into instances, which represent individual elements of a particular domain, classes, which are aggregates of instances with the same characteristics, and attributes describing logical relationships between instances. The way in which SMW represents knowledge was partly inspired by solutions such as the Web Ontology Language, which allows performing an easy transcription from one format to another. From a technical point of view, the MediaWiki platform uses namespaces to group the pages by content. This mechanism was also used in the clustering of ontology elements:
- OWL individuals - these elements are represented as regular articles. Pages of this type account for a significant portion of the data contained in the Wiki. Usually they are grouped in the main namespace, but can be stored in some other spaces, too (People space, Image space).
- OWL classes - they have a counterpart in the basic mechanisms of MediaWiki as a category. The category system, which has been an integral part of the MediaWiki platform since 2004, quickly became the main tool for the classification of documents in Wikipedia and other Wikimedia Foundation projects. Category pages are grouped in the namespace of the same name. They can be organised hierarchically in a similar way as it is done in OWL ontologies.
- OWL properties - the relationships between ontology elements have no counterpart in the MediaWiki engine, and are supplied with the SMW extension. OWL distinguishes the relationships between data (assigning a numerical value to an ontological element) and between objects (the relation of two ontological elements). Semantic MediaWiki simplifies this division by aggregating all types of relationships in the namespace called Property.

In order to easily browse the semantic annotations found on a Wiki page, a factbox is used. It allows users to view the most important facts about the subject. For those who are supporting and complementing the Wiki, it is also a tool to validate the correctness of the Wiki engine "reasoning" with respect to the annotations introduced earlier.


The information is displayed in two columns: the first, starting from the left, contains the attributes used on the page (e.g. the population), while the second one stores the values assigned to them (e.g. 340,000). Each attribute name is also a link to its site, where one can usually find basic information about this attribute (meaning, use). Depending on the design of the attribute, the fact table may include, for example, its value in different units of measurement.

Next to each attribute there is a special magnifying glass icon which is a link to a simple semantic search engine. For example, if a web page is labelled [[is an alloy::Iron]], the user will receive a list of all the sites that meet this requirement. Below the list there is a form in which one can specify any desired attribute-value pair. If the attribute value is numeric, the search engine can also provide pages with an approximation of this value. In the header of the fact table there is an eye icon, which allows quick browsing of all the semantic annotations, as presented in figure 3.

3.4. Parsing of domain ontologies – implementation of the script

The recommended method of entering the ontology into the Wiki structure is by creating one's own Wiki parsing script. SMW initially offered such a possibility, but the multitude of ontological formats and a change in the approach of the authors of the application to the construction of semantic structures in Wiki caused the idea of developing this tool to be abandoned (in most of the recently released versions it has been removed completely). The authors' own script, in turn, allows a relatively easy optimisation, depending on the user needs. Taking into account the available libraries working with MediaWiki, it also gives the possibility of parsing most of the popular ontology formats.

The script applied in SemaWiki to load the ontology was implemented using the Python Wikipediabot (pywikipedia) programming platform. It is a set of tools to automate the work on the pages of MediaWiki and other popular Wiki engines using a web crawler. From the point of view of the Wiki platform, the robot is a normal user with specific access rights.

Fig. 3. Fact table – a description of links.


The first step is to import the necessary libraries that allow us to edit the ontology while preserving the logic graph. For this task, the rdflib package is used. Then the namespaces used in the parsed ontology are defined. After verifying that the robot has correctly loaded the ontology and logged on to the designated Wiki, the actual parsing begins. First in line is the class structure. In each iteration of the loop, the algorithm finds a subject-object pair joined by a predicate such as rdfs:subClassOf. In this way, no class shall be neglected. It is worth noting that during the transcription of the ontology no reasoning is carried out, i.e. if a relationship is not defined directly, it will not be reflected in the generated structure.

Each resulting subject-object pair will be represented in CastWiki as a category page, so its name must begin with the keyword "Category". Additionally, to obtain the required transparency, the natural class name is extracted from each URI. The resulting pages are tested under four conditions. The first checks whether the parent category is not the class Thing (the highest class in OWL; all classes declared in any ontology are subordinate to the class Thing). If the condition is not met, the child class becomes the parent class.

Entering attributes into the Wiki structure is done in the same way. The algorithm takes subject-object pairs combined by the relationship rdfs:subPropertyOf. Such a condition, however, does not guarantee the extraction of all attributes from the ontology. This is due to the fact that, in contrast to classes, attributes are not grouped under one and the same parent attribute.
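A condensed sketch of the class-structure pass is given below; it follows the same rdflib-based approach as the original script, but the ontology file name is an assumption and the generated category pages are printed instead of being uploaded through the pywikipedia robot.

from rdflib import Graph
from rdflib.namespace import OWL, RDFS

def local_name(uri):
    """Extract a readable class name from a URI (the part after '#' or the last '/')."""
    text = str(uri)
    return text.rsplit("#", 1)[-1].rsplit("/", 1)[-1].replace("_", " ")

g = Graph()
g.parse("cast_iron.owl", format="xml")   # hypothetical file with the domain ontology

# One pass over the asserted rdfs:subClassOf pairs; no reasoning is carried out,
# so only directly defined relationships are transcribed (cf. the remark above).
for child, parent in g.subject_objects(RDFS.subClassOf):
    page_title = "Category:" + local_name(child)
    if parent == OWL.Thing:
        body = "Top-level category imported from the domain ontology."
    else:
        body = "[[Category:%s]]" % local_name(parent)   # MediaWiki parent-category link
    print(page_title)
    print(body)
    print()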

3.5. Semantic search

With the ontology ready for action, the user can start adding pages. This process is not much different from filling in the content of a database in a simple Wiki. The easiest way is to enter the name of a specific term into the standard MediaWiki search engine. If it has not already been included in the base, the application will ask whether to create a page for the term entered earlier.

Simple search by attributes, with the continuously expanding knowledge bases of the Wiki type, would be insufficient in the long run. Therefore CastWiki also provides the ability to search based on formal questions. For this purpose, a special syntax has been designed, similar to the solution used in the annotation tags themselves. For example, the query [[has alloying element::Nickel]] will generate all pages for which the attribute "has alloying element" assumes the value "Nickel". Of course, the introduced phrases can be much more advanced. The syntax allows creating queries based on the logic of sets, such as [[Category:Cast Iron]] [[has alloying element::!Nickel]], which will generate a list of all pages in the category "Cast Iron" for which the attribute "has alloying element" does not assume the value "Nickel". In the case of attributes taking a numerical value, there is the additional possibility of declaring search ranges ([[has content of C::>0.1%]] [[has content of C::<2%]]).
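For comparison, an analogous property-based question can also be posed directly to the underlying RDF graph; the sketch below uses rdflib's SPARQL support together with the hypothetical namespace from the earlier ontology fragment, and it only illustrates the idea - it is not part of the SMW query engine.

from rdflib import Graph

g = Graph()
g.parse("cast_iron.owl", format="xml")   # hypothetical file with the domain ontology

# Counterpart of the wiki query [[Category:Cast Iron]] [[has alloying element::Nickel]]:
# find every kind of cast iron that has nickel as an alloying element.
QUERY = """
PREFIX cast: <http://example.org/casting#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?grade WHERE {
    ?grade rdfs:subClassOf cast:Cast_Iron .
    ?grade cast:has_alloying_element cast:Nickel .
}
"""

for row in g.query(QUERY):
    print(row.grade)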

4. SUMMARY

The designed platform is a complete, functional tool that allows for the creation in an enterprise of new channels of communication and knowledge transfer. Such a system can be built at minimal cost - the cost is actually just the time. The platform provides employees with complete information about all the resources of knowledge that are available in the organisation; it can also easily share new resources and integrate the ones already existing but still not catalogued. The implementation of such a system can prevent employees from repeating the same job many times, give easy access to proven best practices that exist in the company, and facilitate the development and transfer of knowledge.

A system shaped in this way has a huge advantage over dedicated systems with a ready knowledge base. It is above all much cheaper. CastWiki must be extended by the staff, which takes time, but it is cheap and easy to use. Every employee can participate in the development of the knowledge base, which allows taking full advantage of the knowledge accumulated in the company. At the same time, it is possible in the process of implementing CastWiki to fill a basic knowledge base with the resources accumulated previously, or with information from other purchased systems.

The proposed tools - ontologies - can significantly affect the competitiveness of casting plants, support knowledge management and reduce the barriers of entry for companies wishing to expand their range of products. The implementation of integrated knowledge management systems, as well as decision support systems, requires long-term investments, especially experts' time, but in the long run it could decide about the survival and competitiveness of an industrial plant.


Solutions proposed in this article greatly improve the search process and data distribution. They are justified by relatively simple solutions that do not require a lot of time to assimilate, mainly owing to the fact that they are based on commonly used technologies. A few years ago, the main problem associated with the implementation of similar techniques was little interest from serious investors. Today, many multinational companies driving the development of information technology (Google, Apple) and the most popular social networking sites (Facebook, last.fm) successfully use their own semantic solutions. Within the last few years, several ontology-based Wiki platforms have been created, which were used as knowledge bases not only in IT-related companies. Among them, a foundry plant could find its place without any major obstacles. As regards conversion into semantic knowledge, information used in foundry practice is in no way different from other data.

Acknowledgement. Scientific work financed from the funds for scientific research as an international project. Decision No. 820/N-Czechy/2010/0 and 0R0B0008 01.

REFERENCES

Adrian, A., Kluska-Nawarecka, S., Marcjan, R., 2007, The role of knowledge engineering in modernisation of new metal processing technologies, Archives of Foundry Engineering, 7, 2, 169-172.

Ciszewski, S., Kluska-Nawarecka, S., 2004, Document driven ontological engineering with applications in casting defects diagnostic, Computer Methods in Materials Science, 4, 1-2, 56-64.

Dobrowolski, G., Kluska-Nawarecka, S., Marcjan, R., Nawarecki, E., 2007, OntoGRator – an intelligent access to heterogeneous knowledge sources about casting technology, Computer Methods in Materials Science, 7, 2, 324-328.

Górny, Z., Pysz, S., Kluska-Nawarecka, S., Regulski, K., 2009, On-line expert system supporting casting processes in the area of diagnostics and decision-making adopted to new technologies, Innowacje w odlewnictwie, Cz. 3, ed. Sobczak, J.J., Instytut Odlewnictwa, Kraków, 251-261 (in Polish).

Kluska-Nawarecka, S., Dobrowolski, G., Marcjan, R., Nawarecki, E., 2002, From passive to active sources of data and knowledge: decentralised information-decision system to aid foundry technologies, Akademia Górniczo-Hutnicza w Krakowie, Katedra Informatyki, Kraków.

Kluska-Nawarecka, S., Połcik, H., Wójcik, T., Nawarecki, E., 2005, Information-decision system aiding scientists and engineers, Lecture Notes in Information Sciences, ISGI 2005: International CODATA Symposium on Generalization and Information, Berlin, Germany, September 14-16, 2005, ed. Kremers, H., CODATA, Berlin.

Nawarecki, E., Kluska-Nawarecka, S., Regulski, K., 2012, Multi-aspect character of the man-computer relationship in a diagnostic-advisory system, Human – Computer Systems Interaction: Backgrounds and Applications 2, Pt. 1, eds. Hippe, Z.S., Kulikowski, J.L., Mroczek, T., Advances in Intelligent and Soft Computing, 98, Springer-Verlag, Berlin, Heidelberg, 85-102.

Marcjan, R., Nawarecki, E., Kluska-Nawarecka, S., Dobrowolski, G., 2002, Integration of the INFOCAST system databases by means of agent technology, INFOBAZY 2002 – Bazy danych dla nauki: III krajowa konferencja naukowa, Gdańsk, 24-26 czerwca 2002, materiały konferencji, CI TASK, Gdańsk, 83-88.

Mrzygłód, B., Regulski, K., 2012, Application of description logic in the modelling of knowledge about the production of machine parts, Hutnik - Wiadomości Hutnicze, 79, 3, 148-151 (in Polish).

Połcik, H., 1999, Baza znormalizowanych gatunków stopów odlewniczych, INFOBAZY'99, Bazy Danych dla Nauki, Centrum Informatyczne TASK, Gdańsk, 385-390 (in Polish).

Regulski, K., Marcjan, R., Kluska-Nawarecka, S., 2008, Knowledge management in casting industry processes, Problemy upravleniâ bezopasnost'û sloznyh sistem: trudy 16 mezdunarodnoj konferencii, Moskva, dekabr' 2008, eds. Arhipova, N.I., Kul'ba, V.V., Rossijskij gosudarstvennyj gumanitarnyj universitet, Moskva, 285-289.

PLATFORM FOR SEMANTIC INTEGRATION AND SHARING OF TECHNOLOGICAL KNOWLEDGE IN METAL PROCESSING AND CASTING

Summary

The paper presents the concept of a knowledge-sharing platform that uses an ontological model for integration purposes. The platform is intended to serve the metal processing industry: to build an integrated knowledge base in which semantic search supported by a domain ontology will be possible. Semantic search will resolve the difficulties encountered in Information Retrieval Systems related to polysemy and synonyms, and will also enable searching by properties (relations), not only by keywords. A model is presented that uses the open Semantic MediaWiki platform combined with the authors' own script parsing the domain ontology.

Received: November 16, 2012 Received in a revised form: December 5, 2012

Accepted: December 12, 2012


313 – 319 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

INDUSTRIAL PROCESS CONTROL WITH CASE-BASED REASONING APPROACH

JAN KUSIAK1, GABRIEL ROJEK1*, ŁUKASZ SZTANGRET1, PIOTR JAROSZ2

1 AGH University of Science and Technology, Faculty of Metals Engineering and Industrial Computer Science, Department of Applied Computer Science and Modelling, al. A. Mickiewicza 30, 30-059 Kraków, Poland
2 AGH University of Science and Technology, Faculty of Non-Ferrous Metals, Department of Physical Chemistry and Metallurgy of Non-Ferrous Metals, al. A. Mickiewicza 30, 30-059 Kraków, Poland

*Corresponding author: [email protected]

Abstract

The goal of the presented work is an attempt to design an industrial control system that uses production data registered in the past during the regular production cycle. The main idea of the system is the processing of production data in order to find registered past cases of production that are similar to the present production period. If the found past production case fulfils the requirements of the given quality criterion, the registered control signals corresponding to that case are considered as the pattern for the actual control. Such an approach is consistent with the core assumption of Case-Based Reasoning, namely that similar problems have similar solutions. The paper presents preliminary results of the implementation of the CBR system for industrial control of the oxidizing roasting process of sulphide zinc concentrates.

Key words: industrial process control, case-based reasoning, oxidizing roasting process of sulphide zinc concentrates, multi-agent system

1. INTRODUCTION

Preparation of zinc from sulphide concentrates is currently realized in industry mainly through hydrometallurgical processes. The first stage of this technology is the transformation of metal sulphides to oxides, which is called the roasting process and is carried out in fluidized bed furnaces. As the result of roasting zinc sulphide concentrates in the fluidized-bed furnace, zinc oxide (ZnO) is obtained in two fractions: fine and thicker dust, with a maximum sulphide sulphur content of 0.6% and 0.4%, respectively. During the roasting process, the aim is to obtain a minimum content of sulphide sulphur in the composition of the product. The roasting of sulphide zinc concentrates also produces heat and gases, which are processed further in the sulphuric acid plant installation. From the point of view of optimization, the oxidizing roasting process is a nonlinear and multidimensional process.

The oxidizing roasting process was modelled using artificial neural networks (ANN), as presented in (Sztangret et al., 2011). The goal of the artificial neural network is to generate an output signal that depends on the input signals and is close to the observed output of the modelled object. The results of modelling the oxidizing roasting process with artificial neural networks presented in (Sztangret et al., 2011) show the usefulness of this approach, especially when used together with evolutionary techniques to optimize industrial process control; however, due to the complex nature of the modelled process, it should be compared and related to other approaches to process control. As presented in (Rojek & Kusiak, 2012b), case-based reasoning (CBR) seems to be one of the possible techniques that can be used in industrial control. The research presented here concerns the analysis and implementation of the CBR approach to control of the industrial oxidizing roasting process.

2. CASE-BASED REASONING

The main paradigm of case-based reasoning (CBR) is reasoning by reusing previous similar situations when solving a current problem. A decision system with the CBR approach uses a case base, which is a collection of previously made and stored experience items, called past cases or simply cases. Every time a new problem is solved, a past case relevant to the present problem has to be selected from the case base, and then this selected case has to be adapted to the current situation. Every time a new problem is solved, the new experience is retained in order to be available for future reasoning concerning future problem situations. Retaining the made experience enables incremental, sustained learning. From the general point of view, the CBR approach relies on experience made in the past while solving concrete problem situations, instead of using general knowledge of a problem domain, as presented in (Aamodt & Plaza, 1994). An example of implementation of the CBR approach is the optimization of autoclave loading for heat treatment of composite materials, where airplane parts are treated in order to obtain the right properties (Aamodt & Plaza, 1994). This system uses relevant earlier situations in order to give advice for the current load. Other application areas of the CBR approach are help-desk and customer service, recommender systems in e-commerce, knowledge and experience management, medical applications, applications in image processing, applications in law, technical diagnosis, design, planning and human entertainment (computer games, music), as presented in (Bergmann et al., 2009).

The common point of technically different CBR systems is the CBR cycle. The CBR cycle is the common algorithm of every CBR application and consists of 4 sequential processes (phases) (Aamodt & Plaza, 1994):
1. Retrieve the most similar case or cases.
2. Reuse the information and knowledge in that case in order to solve the problem.
3. Revise the proposed solution.
4. Retain the parts of this experience in order to use them for future problem solving.

The CBR cycle starts when there is a new problem to be solved. The main task in the first, retrieve process is to find the k nearest neighbours according to a specific similarity measure. The similarity measure can be the inverse of the Euclidean or Hamming distance, or can be modelled specifically according to domain knowledge. The similarity measure should induce a preference order in the case base with respect to the new, currently solved problem. The preference order should make it possible to select one or a small number of cases which are relevant for the new case.
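A minimal sketch of this cycle is shown below. It is not the implementation described later in the paper, only an illustration of how the retrieve-reuse-revise-retain loop can be organized around a list of stored cases; the case fields, the inverse-Euclidean similarity and the placeholder revise step are assumptions made for the example.

import math

# A stored case: description of the problem, its solution and its evaluation.
# The field names are placeholders used only for this illustration.
case_base = []  # list of dicts: {"problem": [...], "solution": ..., "evaluation": ...}

def similarity(a, b):
    """Inverse Euclidean distance used as a simple similarity measure."""
    return 1.0 / (1.0 + math.dist(a, b))

def retrieve(problem, k=3):
    """Phase 1: find the k most similar past cases (preference order)."""
    return sorted(case_base, key=lambda c: similarity(problem, c["problem"]),
                  reverse=True)[:k]

def reuse(retrieved):
    """Phase 2: take over (or adapt) the solution of the best retrieved case."""
    return retrieved[0]["solution"] if retrieved else None

def revise(solution):
    """Phase 3: evaluate the proposed solution against the real process
    (replaced here by a placeholder returning no evaluation)."""
    return None

def retain(problem, solution, evaluation):
    """Phase 4: store the new experience for future problem solving."""
    case_base.append({"problem": problem, "solution": solution,
                      "evaluation": evaluation})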

When one or several similar cases have been selected in the retrieve process, the solution contained in these cases is reused to solve the current problem, which takes place in the reuse process. This process can be very simple when the solution is returned unchanged as the proposed solution for the current case; however, there are domains which require adaptation of the solution. There are two main ways to adapt retrieved past cases to the current problem: (1) transform the past case, (2) reuse the past method that constructed the solution.

In the revise process, the solution generated in the reuse process is evaluated and, in the case of an undesired evaluation, there is the possibility of repairing the case solution using domain-specific knowledge. This phase can consist of two tasks: evaluation of the solution and fault repair. The evaluation task uses the results of applying the suggested solution to the real environment, which can happen by asking a teacher or by performing the task in the real world. This task is usually performed outside the CBR system and makes it necessary to link the CBR system with the real-world domain which the solved problem concerns. Fault repair involves detecting errors in the current solution and using failure explanation to modify the solution so that the errors no longer occur.

The retain process in the CBR cycle concerns learning by retaining the current experience, which usually occurs by simply adding the revised case to the case base. Thanks to this, the revised solution becomes available for reuse in future problem solving. As a result of the retain process, a CBR system gains new experience together with the regular solving of current problems. In some application domains the continuous growth of the case base caused by the retain process leads to a continuous decrease of the efficiency of the retrieve phase.

3. DESIGN OF THE CBR SYSTEM FOR CONTROL OF INDUSTRIAL PROCESS

The implementation of the CBR approach to industrial process control is illustrated with the oxidizing roasting process of sulphide zinc concentrates as an example of a typical industrial process. The goal of controlling this process is to achieve a minimal concentration of sulphide sulphur in the roasted products. All input signals of this process can be divided into three main groups: (1) independent signals - the chemical composition of the input zinc sulphide concentrate, (2) dependent signals - signals that are only measured and that influence the nature of the process, e.g. the temperature inside the furnace, and (3) controllable signals - signals that can be set, e.g. the air pressure after the blower. The quality criterion is the minimal concentration of sulphide sulphur in the roasted products. This concentration is measured several times a working day (e.g. 5 or 7 times a day). All independent signals (concentrations of Zn, Pb, Fe and S in the input concentrate) are measured only once per day, but the dependent signals are measured several times per minute. Controllable signals are set with the same frequency as the dependent signals are measured.

3.1. A case in the domain of industrial control

The fundamental problem in designing a CBR system for any domain of use is to define what a case is. A case relates to one single problem solved by the CBR system. Considering the oxidizing roasting process of sulphide zinc concentrates, the solved problem can be stated as the question: how to control the process, knowing the independent signals (the chemical composition of the input concentrate), in order to obtain a minimal concentration of sulphide sulphur in the products. Because the chemical composition of the input concentrate is known only once per production day (at the beginning of the day), it is assumed that the whole day of production should be controlled in the same manner, using one single control function. This control function should take into consideration the values of the measured dependent signals and on this basis propose the values of the controllable signals. After the end of the production day, an average quality measure is known, which makes it possible to evaluate the production characterized by the measured values of the independent signals and the used control function (in the form of dependent and controllable signals).

The above discussion allows a case to be defined as the triple problem-solution-evaluation. The problem is specified by the measured independent signals (the chemical composition of the input concentrate). The solution is, in other words, the control function used for the production characterized by the specified independent signals. The control function takes as its input the values of the dependent signals and returns the values of the controllable signals, so it can be described by the dependent and controllable signals registered during past production. The evaluation is represented by the average concentration of sulphide sulphur in the products made during the period of using the control function specified in the solution. Summing up, a case is a data structure that consists of:
- Problem: single values of the independent signals for the whole considered production day,
- Solution: the description of the used control function in the form of the values of the dependent and controllable signals registered during the considered production day,
- Evaluation: the average quality measure in the form of the average concentration of sulphide sulphur in the products made during the considered production day.

From the general point of view, a case represents one day of production. Every CBR system needs knowledge represented in the case base in order to propose a solution for the current problem. In the domain of the presented industrial process it is possible to use past data related to manual control performed in the past. Such a case base should enable the designed system to imitate the manual control, taking into account the quality results obtained during different production days.
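For illustration, such a case could be held in a simple data structure like the sketch below; the field names are hypothetical and only mirror the problem-solution-evaluation triple described above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Case:
    """One day of production stored as a problem-solution-evaluation triple."""
    # Problem: independent signals measured once per day (illustrative names).
    concentrate_composition: dict  # e.g. {"Zn": ..., "Pb": ..., "Fe": ..., "S": ...}
    # Solution: the control function recorded as paired signal samples.
    dependent_signals: List[List[float]] = field(default_factory=list)     # e.g. furnace temperatures
    controllable_signals: List[List[float]] = field(default_factory=list)  # e.g. air pressure settings
    # Evaluation: average sulphide sulphur concentration in the day's product.
    avg_sulphide_sulphur: float = float("nan")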

3.2. The retrieve phase

In the domain of control of the presented industrial process, the main goal of the retrieve phase is to find a past case which concerns a problem similar to the current one and whose solution is evaluated as desirable. Previously solved problems similar to the current one are cases representing past production days with similar values of the measured independent signals (which means a similar composition of the input materials). It is proposed first to choose a small number of past cases representing similar problems (using the k-nearest neighbour algorithm) and then to select among them the one that has the best evaluation, which can be done in two steps:
1. Choose a number of previous cases from the case base with the highest similarity, measured as the Euclidean distance between the values of the independent signals of the current problem and of the solved problems represented by the previous cases.
2. Select among the chosen cases the one which is evaluated with the most desirable average value of the quality measure.
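A sketch of this two-step selection is given below, assuming the cases are stored in a structure like the Case sketch from section 3.1 and that a lower average sulphide sulphur content means a better evaluation; the helper names are illustrative.

import math

def retrieve(case_base, current_composition, k=5):
    """Two-step retrieve: k nearest problems, then the best-evaluated one."""
    def distance(case):
        # Euclidean distance between independent signals (same key order assumed).
        keys = sorted(current_composition)
        return math.dist([current_composition[key] for key in keys],
                         [case.concentrate_composition[key] for key in keys])

    # Step 1: the k past cases with the most similar input composition.
    nearest = sorted(case_base, key=distance)[:k]
    # Step 2: among them, the case with the lowest sulphide sulphur content.
    return min(nearest, key=lambda case: case.avg_sulphide_sulphur)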

3.3. The reuse phase

In the reuse phase the solution represented by the past case selected in the previous phase should be applied to the current problem, i.e. the control of the industrial process. The past case contains a description of the solution in the form of the values of the dependent and controllable signals. It is assumed that the controllable signals are a function (named the control function) of the dependent signals. In order to reuse the solution represented by the selected case, the control function used in that case has to be approximated and then used in the control of the present production period, which can be done in two steps:
1. A model of the control function relevant for the selected past case should be prepared with the use of the values of the dependent and controllable signals.
2. The prepared model of the control function should be used for solving the current problem.

Artificial neural networks can be used for modelling and applying the control function. In the first step the network is trained with the values of the dependent and controllable signals contained in the selected past case. In the second step the trained network is used to predict the values of the controllable signals on the basis of the currently measured dependent signals. During the second step all values of the dependent and controllable signals should be saved in order to be used during the retain phase.
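As an illustration only (the system described in section 4 was implemented in Java/JADE), the sketch below approximates the control function of the selected case with an off-the-shelf multilayer perceptron regressor and then predicts the controllable signals from newly measured dependent signals. The use of scikit-learn, the hidden layer sizes and the function names are assumptions made for this example.

from sklearn.neural_network import MLPRegressor

def build_control_function(selected_case):
    """Step 1: approximate the control function stored in the selected case."""
    model = MLPRegressor(hidden_layer_sizes=(13, 11), activation="logistic",
                         max_iter=2000, random_state=0)
    # Inputs: registered dependent signals; targets: registered controllable signals.
    model.fit(selected_case.dependent_signals, selected_case.controllable_signals)
    return model

def control_step(model, measured_dependent_signals, log):
    """Step 2: propose controllable signals for the current measurement and
    save both signal vectors so they can be used in the retain phase."""
    proposed = model.predict([measured_dependent_signals])[0]
    log.append((measured_dependent_signals, list(proposed)))
    return proposed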

3.4. The revise phase

The revise phase provides feedback on the solution applied in the reuse phase to the currently solved problem. In the industrial control domain, the feedback takes the form of the evaluation of the real products made during the current production period. This evaluation has to be made outside the computer system and is usually equivalent to a quality measure (made by a human). In the case of industrial control of the oxidizing roasting process of sulphide zinc concentrates, the quality measure is obtained after the production time, so no fault repair concerning the present production period is possible.

3.5. The retain phase

The retain phase enables learning in the CBR cycle. This phase starts when the current problem has been solved and the evaluation of this solution is known. The current case already contains the description of the problem, the description of the applied solution in the form of the values of the dependent and controllable signals saved during the reuse phase, and the evaluation in the form of the average value of the quality measure. In the retain phase the current case is simply added to the case base and becomes one of the past cases representing experience items concerning control of the industrial process, which makes the just completed case available for reuse in future problem solving.

4. IMPLEMENTATION OF THE CBR SYSTEM

4.1. Control of oxidizing roasting process

The analysis presented above, concerning a CBR system for industrial control of the oxidizing roasting process of sulphide zinc concentrates, was implemented using agent technology, which was presented, among others, in (Weiss, 1999; Wooldridge, 2001). The complete functioning of the CBR system is partitioned into individual agents. Two main types of agents function in the system:
- the Past Episode Agent, which represents one past case,
- the Control Agent, which performs the CBR operations concerning finding the solution for control of the present production period.

Because one Past Episode Agent represents one past case, the number of Past Episode Agents is equal to the number of past cases contained in the case base. A Past Episode Agent can receive messages concerning the represented past case and answer such questions by providing information concerning the description of the problem, solution or evaluation represented by the past case.

The Control Agent performs the CBR operations that aim at control of the oxidizing roasting process. In the retrieve phase this agent finds the one past case that is relevant for the current production period. This selection is made by agents communicating in the system: first, the 5 Past Episode Agents that represent the most similar production periods with respect to the independent signals are selected; second, from those five only the one that represents the best evaluated production is chosen (as presented in subsection 3.2). In the reuse phase the Control Agent uses an artificial neural network (as presented in subsection 3.3). This neural network is a multilayer perceptron composed of neurons with a sigmoid activation function. The neurons are arranged in 4 layers of 9, 13, 11 and 7 neurons. In the modelling step, supervised learning is performed using the data represented by the case selected in the previous phase.

The main problem faced during implementation of the software concerns the revise and retain phases, which require a real evaluation of the proposed solution. This evaluation would not be a problem if the implemented system were actually acting as the control system for the real process. The lack of such evaluation, however, was not a handicap in implementing the presented part of the system, i.e. finding the solution for control of the present production period. Due to the mentioned problem with evaluating the products made under control of the presented system, the revise and retain phases are not implemented.

For the implementation of the presented system JAVA and JADE (a framework for agent systems) were used, as in previous works concerning industrial control presented in (Rojek et al., 2011; Rojek & Kusiak, 2012b). The use of agent technology allows many development problems that appear during implementation of CBR systems to be overcome. The CBR methodology relies on using the information contained in the case base, which is simply a set of distributed cases concerning past situations. The number of past cases changes as new problems are successively solved and stored in order to be used in the future. Such a construction of the case base can be simply transferred to multiple agents, each of which represents, as proposed, one case. This transformation maintains the natural decomposition of the problem through the use of agent technology.

4.2. Combustion control of blast furnace stoves

The implementation of the CBR methodology for combustion control of blast furnace stoves presented in (Sun, 2008) seems analogous to the work presented above. The main problems of the application shown in (Sun, 2008) are related to the definition of the representation of a case, the case base and all phases of the implemented CBR cycle. In the retrieve phase, past cases similar to the current one are searched for with the same method as presented in subsection 3.2 (the k-nearest neighbour algorithm and the Euclidean distance). The reuse phase is much simpler due to the fact that a case represents only one moment in time. The solution represented in the relevant case is simply taken directly as the final control decision. The work presented in section 3 treats a case as a whole day of production, which requires using approximation methods (e.g. neural networks) in order to obtain momentary control decisions. The revise and retain phases are described in (Sun, 2008) only very briefly. It is assumed that cases are evaluated later and added to the case base for future problem solving, which is similar to the research presented in subsections 3.4-3.5.

5. ARTIFICIAL NEURAL NETWORK BASED CONTROL SYSTEM

Another approach to control of an industrial process uses a model and an optimization procedure (figure 1). Implementing this approach for the considered oxidizing roasting process, an artificial neural network is used as a model for the prediction of the concentration of sulphide sulphur in the roasted ore. The elaborated ANN model is based on the architecture of a Multi-Layer Perceptron (MLP). The ANN first has to be trained in order to predict the concentration. For this step a supervised learning method is used. The dataset used for training contains records composed of the measurements of the roasting process and the resulting concentration.

After the ANN has been trained, it can be used as the model employed by an optimization procedure. As the optimization method, particle swarm optimization (PSO) is used. The goal of the optimization is to obtain the values of the control signals which provide a minimal concentration of sulphide sulphur in the roasted ore. The PSO method is inspired by the behaviour of swarms of birds, insects or fish shoals looking for food or shelter. Every member of the swarm searches in its own neighbourhood but also follows the others, usually the best member of the swarm. In the algorithm based on this behaviour, the swarm is considered as a set of particles representing single solutions. Each particle is characterized by its own position and velocity. Particles move through the decision space and remember the best position they have ever had. A more accurate description of this method can be found in (Sztangret et al., 2009) and (Sztangret et al., 2010).
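A compact sketch of this control scheme is given below: a basic PSO loop searches the space of control-signal values for the vector that minimizes the concentration predicted by a trained model. The objective function, the bounds and the PSO coefficients are illustrative assumptions and not the settings used in the cited works.

import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over the box `bounds` with basic particle swarm optimization."""
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds).T
    dim = len(bounds)
    pos = rng.uniform(low, high, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + attraction to personal and global best positions.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, low, high)
        val = np.array([objective(p) for p in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Usage sketch: `ann_model` would be the trained concentration model and the
# bounds the admissible ranges of the control signals (both assumed here).
# best_controls, best_conc = pso_minimize(
#     lambda x: float(ann_model.predict([x])[0]), bounds=[(0.0, 1.0)] * 5)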

Fig. 1. Scheme of the control system using an ANN model and an optimization method.

6. CONCLUSIONS

The case-based reasoning approach makes it possible to make decisions when no model of the implementation domain is known. The decision is made on the basis of previously made decisions, if those decisions brought desirable results. The CBR system uses experience, that is, information about previously made decisions. The experience takes the form of a case base, which is used for the currently solved problems. While solving problems, the CBR system simultaneously gains experience by adding the current problems and their solutions to the case base. The research presented in this article shows that it is possible to implement the CBR methodology in the domain of industrial process control. Such an implementation involves many design decisions regarding the representation of a case, the construction of the case base and the development of the whole CBR cycle. All these decisions have to relate to the real industrial process that is controlled.

Future work should be oriented towards the implementation of the CBR approach in real control of an industrial process. Only such an implementation will make it possible to obtain a real evaluation of the decisions made in controlling the production. If the evaluation of the control made by the CBR system is known, the revise and retain phases will be possible to realize and it will finally be possible to close the CBR cycle. Such a CBR system will not only use its experience, but will also gain experience, which is presented as the main feature of case-based reasoning. From a general point of view, the operation of such a complete system can be analogous to the work of a worker who gains and uses experience according to the decisions made.

Acknowledgment. Financial assistance of the MNiSzW (Działalność statutowa AGH nr 11.11.110.085).

REFERENCES

Aamodt, A., Plaza, E., 1994, Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches, AICom - Artificial Intelligence Communications, 7, 39-59.

Bergmann, R., Althoff, K. D., Minor, M., Reichle, M., Bach, K., 2009, Case-Based Reasoning – Introduction and Recent Developments, Künstliche Intelligenz: Special Issue on Case-Based Reasoning, 23, 5-11.

Rojek, G., Kusiak, J., 2012a, Industrial control system based on data processing, Proc. Conf. ICAISC 2012, eds, Rutkowski, L., Zakopane, 502-510.

Rojek, G., Kusiak, J., 2012b, Case-Based Reasoning Approach to Control of Industrial Processes, submitted and accepted to Computer Methods in Materials Science.

Rojek, G., Sztangret, Ł., Kusiak, J., 2011, Agent-based information processing in a domain of the industrial process optimization, Computer Methods in Materials Science, 11, 297-302.

Sun, J., 2008, CBR Application in Combustion Control of Blast Furnace Stoves, Proc. Conf. IMECS 2008, eds, Ao, S. I., Hong Kong, vol. I, 25-28.

Sztangret, Ł., Stanisławczyk, A., Kusiak, J., 2009, Bio-inspired optimization strategies in control of copper flash smelting process, Computer Methods in Materials Science, 9, 400-408.

Sztangret, Ł., Szeliga, D., Kusiak, J., 2010, Analiza wrażliwości jako metoda wspomagająca optymalizację parametrów procesów metalurgicznych, Hutnik – Wiadomości Hutnicze, 12, 721-725 (in Polish).

Sztangret, Ł., Rauch, Ł., Kusiak, J., Jarosz, P., Małecki S., 2011, Modelling of the oxidizing roasting process of sulphide zinc concentrates using the artificial neural networks, Computer Methods in Materials Science, 11, 122-127.

Weiss, G., 1999, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, MIT Press, Cambridge, USA.

Wooldridge, M., 2001, Introduction to Multiagent Systems, John Wiley & Sons, Inc., New York, USA.

CONTROL OF INDUSTRIAL PROCESSES WITH A CASE-BASED REASONING APPROACH

Summary

The goal of the presented work is an attempt to design an industrial control system which, during the current production cycle, uses production data registered in the past. The main assumption of the system is the processing of production data in order to find registered past production cases that are similar to the current production period. If a production case found in the base of past cases fulfils the requirements of a given quality criterion, the registered values of the control signals corresponding to the found case are regarded as a pattern for the current control. Such an approach is consistent with the core assumption of Case-Based Reasoning, namely that similar problems have similar solutions. The paper presents preliminary results of the implementation of a CBR system for control of an industrial process, namely the oxidizing roasting process of sulphide zinc concentrates.

Received: December 4, 2012 Received in a revised form: December 18, 2012

Accepted: December 28, 2012


320 – 325 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

RULE-BASED SIMPLIFIED PROCEDURE FOR MODELING OF STRESS RELAXATION

KRZYSZTOF REGULSKI1*, DANUTA SZELIGA1, JACEK ROŃDA1, ANDRZEJ KUŹNIAR2, RAFAŁ PUC2

1 AGH University of Science and Technology, Mickiewicza 30, 30-059 Kraków, Poland
2 WSK "PZL - Rzeszów" S.A., Hetmańska 120, 35-045 Rzeszów, Poland

*Corresponding author: [email protected]

Abstract

A case study of a rule-based knowledge base on stress relaxation in welded components is presented in the paper. The procedure of formulating a simplified decision support model - including expert knowledge externalization, formalization of knowledge, the selection of variables and the identification of attribute domains - is described. A decision tree was used to support creating and visualizing the model. The result of the work is a set of rules constituting a simplified model to predict the stress relaxation parameters of stress-relief annealing applied to welded components of the PZL-10W helicopter engine produced by WSK Rzeszów. The developed application can be implemented in one of the reasoning system shells.

Key words: rule-based system, knowledge base, decision support, stress-relief annealing, welding

1. CHARACTERISTICS OF THE RESEARCH WORK

The inference and the development of control models proceed in several steps and form an iterative process in which specialists perform the role of experts, not only at the stage when the model is defined, but also at the stages when the variables of a process are selected, the variable domains are determined, inference rules are proposed, and finally during model evaluation.

This process also requires the collaboration of experts from various areas of expertise, not only from manufacturing, but also from knowledge engineering, computer science, etc. (Rutkowski, 2005).

The aim of this research is to propose a set of control rules aimed at decreasing the stress in welded components on the basis of the technological knowledge of WSK Rzeszów specialists. An inference stress relaxation model is developed in collaboration with welding experts from WSK Rzeszów. The design of the model requires the determination of the model scope, the decision criteria and an appropriate selection of independent variables. Various materials, such as Inconel 625, Inconel 718 and Steel 410, are considered in the numerical simulation of welding and heat treatment processes for the manufacturing of a turbine engine.

2. SOURCES OF TECHNOLOGICAL KNOWLEDGE UTILIZED IN THE MODEL DEVELOPMENT

2.1. Literature review

The knowledge base for the assessment of weldment integrity can be based on the measurement of residual stress and the density of micro-cracks. Issues related to the relaxation of stress after welding have been discussed in numerous publications, e.g. by Tasak (2008) and Pilarczyk (1983).

Stress relief annealing leads to some stress relaxation and can also restore ductility in brittle zones. Stress-relief annealing is the most common method of removing internal stress. Thermal annealing reduces the yield strength at elevated temperatures, which results in the occurrence of plastic deformation in areas where the second invariant of the internal stresses exceeds the local yield limit. Increasing the annealing temperature reduces the yield limit and the strength of the steel. That situation is usually preferred in the case of stress-relieving elements that are required to have good strength properties. Lowering the annealing temperature could lead to insufficient stress relaxation. The most important parameters of stress-relief annealing are: the alloy composition, the complexity of the shape and size of a product, the heating rate, the annealing temperature and the cooling rate.

2.2. Knowledge base and experience of the executive team

The authors' preliminary knowledge in the area of rule system implementation was invaluable in the preparatory phase of the research project (Kluska-Nawarecka et al., 2007; Kluska-Nawarecka & Regulski, 2007; Kluska-Nawarecka et al., 2009; Mrzygłód & Regulski, 2012; Nawarecki et al., 2012; Szeliga et al., 2011; Szeliga, 2012). During the problem formulation, after a literature study and discussions with engineers from WSK Rzeszów, the major tasks were defined and the objectives were identified, together with the selection of decision-making criteria and appropriate variables, the so-called inference objects. The first step in developing a decision-making support system is to determine the inference object. There are several variants of such a study, based on:
- the prediction model: determining the quality of the residual stress after heat treatment on the basis of welding and annealing parameters and workpiece data,
- the decision support model: determination of heat treatment parameters on the basis of the expected properties of the material after the annealing process,
- the diagnostic model: determining the causes of defects in the process of heat treatment,
- the decision-making control model: determination of heat treatment parameters on the basis of the workpiece.

Finally, the authors followed the fourth option, i.e. the decision-making control model. The manufacturing knowledge available in WSK Rzeszów exists in non-formal documentation, industrial practice and observations recorded by technologists, and can be collected and formally coded in the form of databases. Rules supporting the decision-making process will act as the inference engine of this database.

2.3. Internal procedures and standards

Each of the engine elements has an exact specification of the manufacturing process output. The specifications are established according to materials treatment standards. Two of the standards, the American (AMS) and the international (ISO) ones, are generally available, but the inner specifications are confidential. Those confidential specifications were available for this study, but the knowledge base and decision-making models derived from them remain the property of WSK Rzeszów. Consequently, the decision tables, decision trees and rules are presented here without confidential details.

Heat treatment of the turbine engine case is carried out after welding. It may also be used during engine overhaul, when a case is repaired by welding. When the control procedure shows that weld cracks exceed the safety limits, the repaired element must be re-welded.

Typical heat treatment of a case consists of vacuum annealing conducted according to the following procedure:
1. Securing the required level of vacuum. Some degradation of this vacuum state is permissible during the process.
2. Heating the furnace to the required temperature.
3. Continuous heating of the chamber to the annealing temperature and maintaining this temperature for a specified period.
4. Cooling the furnace at a specific rate down to the minimum temperature and further cooling the case down to the ambient temperature.

Annealing of the single parts of a case and of components is carried out in a furnace with a fixture for maintaining the shape and dimensions of elements at risk of deformation due to thermal dilatation. The same device is used in the case maintenance procedure.


2.4. WSK base of knowledge

The information available from the production engineers appointed by WSK Rzeszów is a valuable source of knowledge for the development of the process control model. Non-formalized knowledge based on industrial experience is called by managers "tacit knowledge" or know-how. Without knowledge of the practical aspects of decision making in an industrial environment, and without familiarity with the most common technological dilemmas, it is impossible to develop a control model and propose inference rules. In the later stages of the work, other sources of knowledge play only auxiliary functions. The major task of the knowledge engineer is the externalization of the experts' experience.

To develop the control model of stress relaxation, the knowledge of the production engineers was codified following a cycle of interviews and model assessments. The cycle was repeated several times, which helped to avoid modelling errors and to identify the decision rules.

3. KNOWLEDGE CODING FOR RULE-BASED EXPERT SYSTEMS

Knowledge coding is the next step in the design process of the control model for welding. The objective of coding is to represent production engineering knowledge in such a way that it can be used in the inference process. Moreover, the introduced formalism should be clearly understood by a production engineer, who can then evaluate and verify the knowledge. Several methods were used to perform this process, by developing the following objects: decision trees, decision tables and rules of inference.

3.1. Selection of inference parameters

The first step in the design of the decision-making system is the identification of parameters, i.e. the variables used in the inference rules. A number of variables can be proposed, regardless of whether they will be used in the final model. Variables can be selected according to the future application, determining which of them will be used in relations and which will be evaluated in the inference process. The domain of each parameter/variable is determined as a field attribute. In this model some variables can appear both as a result of a conclusion and as a rationale. For example, the variable 'specification' can be the result of the inference in the first phase and may later appear as a premise in further rules.

The production engineering experts selected the following variables for the decision-making process:
- specification of materials,
- necessity of fixture usage during stress-relief annealing,
- heating rate,
- number of stops during the heating period for a supply of argon.

3.2. Decision-making tables

One of the steps of the complete decision-making model is to answer the question related to maintaining the workpiece shape, i.e. whether the use of a fixture is necessary during heat treatment. This decision is important from the economical point of view, because the mass of the fixture, which reaches several kilograms, absorbs heat; therefore, to reach the same annealing temperature, the oven must be heated longer than in the case of heat treatment without such a device.

The base surface is often the outer cylindrical surface and a cylinder base. Some parts can be supported simultaneously on various surfaces. A fixture strengthening the workpiece should be used for flexible parts. The production engineer decides on which surfaces a fixture should be applied; usually those are the outer surfaces of a workpiece.

The diameters of a workpiece are considered together with dimensional tolerances and shape requirements such as flatness and roundness. Dimensional and geometrical tolerances are controlled after heat treatment. The attachment and detachment of a strengthening fixture should be easy and unambiguous.

To eliminate thermal deflection after heat treatment, various repair operations are used, such as straightening for bars and tubes or spinning for cylindrical parts.

The final heat treatment process is marked by the acronym HT. When such an operation is final and there is a high risk of deflection, a strengthening fixation must be used to keep the shape within strictly prescribed tolerances.

The variables constituting the so-called decision table do not distinguish between types of support surfaces, e.g. cylindrical or flat. Such information is already included, among others, in the variable named 'deformations'. The decision table, shown as table 1, assigns a number of decision rules to the heat treatment scenarios. It can be read as follows: scenario 1 - when the heat treatment is at an interoperation stage, the tolerances of the supporting surface are loose, and the deflection could exceed the tolerance, then a strengthening fixation must be applied.

Fig. 1. A part of the decision tree to determine the heat treatment parameters on the basis of a workpiece (decision-making control model); source: own study.

This rule can be simplified, as in this particular scenario the information on thermal deflections is redundant. This parameter could therefore be omitted, because the decision is made on the basis of the information about the heat treatment stage and the appropriate tolerances. The rationale for each decision should be described for each manufacturing process scenario. This redundancy will be eliminated in the future process analysis.

3.3. Construction of decision trees

The process of knowledge acquisition from experts is much more laborious than it would appear from this study. The results of such acquisition are gathered collectively in the decision-making table. For the sake of simplicity, the authors decided to omit a number of steps leading to the refinement of the relations and inference rules. The entire decision tree depicts the rules in a comprehensive manner. Since the whole tree exceeds one page and the information is confidential, the value of the parameter 'specification' is presented only by an appropriate acronym or symbol (see figure 1). To illustrate the idea of the decision tree, only a small portion of the full model is presented here, and only for Steel 410.

3.4. Rules of inference

Inference control rules can be generated on the basis of the decision tree. These rules are shortened to include only the conditions necessary for reaching a conclusion. For example, a few selected rules are the following:

IF material = "Inconel 718" AND treating_stage = "final"
THEN specification = "Inc718a"

IF treating_stage = "final" AND tolerance = "tight" AND deformation = "significant"
THEN apparatus = "yes"

IF specification = "Inc718a" AND heating_speed = "6°C/min"
THEN number_of_stops = 0

IF specification = "625" AND heating_speed = "8°C/min" AND exploitation_treatement = TRUE
THEN number_of_stops = 2

Table 1. The decision table showing the requirement for a strengthening fixation during stress-relief annealing.

Possible scenarios:                          1        2    3     4      5    6
Treating stage:                              internal            final
Tolerances of supporting surfaces:           loose    tight      loose  tight
Anticipated deformations:                    YES      NO   YES   YES    NO   YES
Use of stress-relief annealing apparatus:    NO       NO   YES   YES    NO   YES


Using this type of rules, a production engineer can make decisions about the indicated variables, such as: the specification of materials, the use of a strengthening fixture during annealing, the heating rate, and the number of stops for the application of an argon dose. These rules are also ready to be implemented in one of the expert system shells.
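As an illustration of how such rules can be executed by a simple forward-chaining engine (a generic sketch, not the ReBIT shell mentioned in section 4), the example below repeatedly fires the rules listed in section 3.4 against a set of user-supplied facts:

# Each rule: (conditions that must all hold, conclusions added to the facts).
# The rules repeat the examples from section 3.4; attribute names as in the text.
RULES = [
    ({"material": "Inconel 718", "treating_stage": "final"},
     {"specification": "Inc718a"}),
    ({"treating_stage": "final", "tolerance": "tight", "deformation": "significant"},
     {"apparatus": "yes"}),
    ({"specification": "Inc718a", "heating_speed": "6°C/min"},
     {"number_of_stops": 0}),
    ({"specification": "625", "heating_speed": "8°C/min", "exploitation_treatement": True},
     {"number_of_stops": 2}),
]

def forward_chain(facts, rules=RULES):
    """Repeatedly fire rules whose premises are satisfied until nothing changes."""
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusions in rules:
            if all(facts.get(k) == v for k, v in conditions.items()):
                for key, value in conclusions.items():
                    if facts.get(key) != value:
                        facts[key] = value
                        changed = True
    return facts

print(forward_chain({"material": "Inconel 718", "treating_stage": "final",
                     "heating_speed": "6°C/min"}))
# -> adds specification = "Inc718a" and number_of_stops = 0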

3.5. Example of use case

Reading a decision tree or applying the rules of inference in a daily routine can be presented in the form of a dialogue with a user. The dialogue can proceed as follows:

MODEL: What kind of material are the parts made of?
USER: Steel 410
MODEL: Is the treating stage final or internal?
USER: Final
MODEL-CONCLUSION: You should use the specification of parameters Spec410a
MODEL: Is the tolerance of dimensions tight or loose?
USER: Loose
MODEL: Is the risk of deformation significant?
USER: Yes
MODEL-CONCLUSION: You should use a geometry-sustaining apparatus during stress-relief annealing
MODEL-CONCLUSION: You should apply a heating rate of 6 degrees Celsius per minute
MODEL-CONCLUSION: You should plan 1 stop for argon application

4. APPLICATION OF KNOWLEDGE MODEL

The rules presented in the paper were implemented in the ReBIT system (Banet et al., 2011). ReBIT is a Business Rules Management System which combines the capabilities of a rule-based decision support system with the expressiveness available in algorithmic programming languages. The system acts as an expert system shell: it implements an inference engine and gives the opportunity to develop a knowledge base. The system was developed in the Department of Applied Computer Science of the Faculty of Management, AGH in Krakow.

The research team decided to apply forward inference. The knowledge base consists of several dozen variables and tens of rules. The decision variables are: the specification (which is a set of 16 parameters of stress-relief annealing), the requirement for a strengthening fixation, the heating speed and the number of stops.

The whole system, together with the developed knowledge base, was successfully implemented in WSK Rzeszów.

5. SUMMARY

The task was performed through the following works:
- acquisition of knowledge from the best practice of production engineering in WSK, regarding e.g. stress relaxation after welding of the engine case and its components,
- codification of the expert knowledge about the present-day process experience and the ways of deciding on heat treatment parameters for selected materials and components,
- codification and implementation of the manufacturing know-how in a decision table, a decision tree and control rules of inference.

The final step of the "expert" system development consists of the derivation of rules for the control of the stress relaxation process. Such a set of rules would be used in the future by process engineers to support decisions in the design of stress-relief annealing. The developed knowledge base is a functional model of stress relaxation control. The model was sent to WSK Rzeszów for evaluation. The major result of this paper is an attempt at codification of previously informal expert knowledge and a proposition of inference rules. For further application, the presented scheme should be implemented in one of the inference systems.

Acknowledgements. This research was carried out within the research project of NCBiR titled SPAW, no. ZPB/33/63903/IT2/10, 2010-06-01.

REFERENCES

Banet, E., Baster, B., Duda, J., Gaweł, B., Jankowski, R., Jędrusik, S., Macioł, P., Macioł, A., Madej, Ł., Nowak, J., Paliński, A., Paradowska, W., Pilch, A., Puka, R., Rębiasz, B., Stawowy, A., Śliwa, Z., Wrona, R., 2011, Business rules management: perspectives for application in technology management, eds. Macioł, A., Stawowy, A., AGH, Kraków, ISBN 978-83-932904-0-6.

Kluska-Nawarecka, S., Marcjan, R., Adrian, A., 2007, The role of knowledge engineering in modernisation of new metal processing technologies, Archives of Foundry Engineering, 7, 169-172.

Kluska-Nawarecka, S., Regulski, K., 2007, Knowledge management in material technology support systems, Problems of Mechanical Engineering and Robotics, 36, 73-86 (in Polish).

Kluska-Nawarecka, S., Górny, Z., Pysz, S., Regulski, K., 2009, On-line expert system supporting casting processes in the area of diagnostics and decision-making adopted to new technologies, Innowacje w odlewnictwie, ed. Sobczak, J., vol. 3, Instytut Odlewnictwa, Kraków, 251-261 (in Polish).

Mrzygłód, B., Regulski, K., 2012, Application of description logic in the modelling of knowledge about the production of machine parts, Hutnik Wiadomości Hutnicze, 79, 148-151 (in Polish).

Nawarecki, E., Kluska-Nawarecka, S., Regulski, K., 2012, Multi-aspect character of the man-computer relationship in a diagnostic-advisory system, Human-Computer Systems Interaction: Backgrounds and Applications 2, Pt. 1, eds. Hippe, Z. S., Kulikowski, J. L., Mroczek, T., Springer-Verlag, Berlin Heidelberg, Advances in Intelligent and Soft Computing, 98, 85-102.

Pilarczyk, J., 1983, Poradnik inżyniera – Spawalnictwo, Wydawnictwo Naukowo-Techniczne, Warszawa (in Polish).

Rutkowski, L., 2005, Methods and techniques of artificial intelligence, Wydawnictwo Naukowe PWN, Warszawa (in Polish).

Szeliga, D., Pietrzyk, M., Kuziak, R., Podvysotskyy, V., 2011, Rheological model of Cu based alloys accounting for the preheating prior to deformation, Archives of Civil and Mechanical Engineering, 11, 451–467.

Szeliga, D, 2012, Design of the continuous annealing process for multiphase steel strips, 21st International Confer-ence on Metallurgy and Materials, Brno, CD ROM.

Tasak, E., 2008, Metalurgia spawania, Wydawnictwo: Biuro Ekspertyz i Doradztwa Technicznego "Techmateks", Krakow (in Polish).

A RULE-BASED SIMPLIFIED PROCEDURE FOR MODELING OF STRESS RELAXATION

Summary

The paper presents a case study of building a rule-based knowledge base for the relaxation of stresses arising in welded components. The procedure of creating a simplified decision-support model is described, covering the externalization of expert knowledge, the formalization of knowledge, the selection of variables and the determination of attribute domains. A decision tree was also used as a tool supporting the creation and visualization of the model. The result of the work is a set of rules constituting a simplified model which, on the basis of the values of several user-defined variables, determines the parameters of the stress-relief annealing process used at WSK Rzeszów in the production of components of PZL-10W helicopter engines. In the future, such a model can be implemented in inference systems.

Received: September 20, 2012 Received in a revised form: November 4, 2012

Accepted: November 21, 2012


326 – 332 ISSN 1641-8581

Publishing House A K A P I T

COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów

Vol. 13, 2013, No. 2

EXPERIMENTAL APPARATUS FOR SHEET METAL HEMMING ANALYSIS

SŁAWOMIR ŚWIŁŁO

Faculty of Production Engineering, Warsaw University of Technology, Warsaw, Poland
*Corresponding author: [email protected]

Abstract

A new portable system for experimental investigation of the sheet metal hemming process was developed. In the introduction, information is provided about the need to use machine vision systems to solve problems that occur in the hemming process. Then, a test stand designed for the practical implementation of a three-step hemming process is presented. Among the different vision systems available, a method using laser light scanning for the reconstruction of the geometry of the examined hemmed sample was selected. An optical system for studies of the measurement technique and a method of image analysis used in the described example of the plastic forming process are presented. In this process, the test sample image is first recorded, and it is then analysed to obtain information about the outline of the deformed line. Further in the text, a new method proposed by the author for the reconstruction of a 3D outline of the hemmed sample is described, along with a technique to calculate the value of strain on its surface. Finally, a portable measurement system for quality control of the hemmed surface edges is shown for industrial application.

Key words: strain analysis, sheet metal hemming, experimental analysis, vision based measurement

1. INTRODUCTION

The subject of the paper relates to the proposed complex solution for quantitative and qualitative control of a three-step hemming process (figure 1). Hemming is a sheet metal forming process which consists of flanging, followed by pre-hemming and final hemming, as classified by Muderrisoglu et al. (1996). At the final stage of the process, the end part of the sheet, rolled over to the inside onto itself, forms an angle of 180 degrees with the remaining base part of the sheet. This process is applied in the final stage of car body production and is used for two purposes: (i) to join together two parts of the sheet metal, where one sheet fills a gap between the bent edges of another sheet, and (ii) as a finishing operation by which the raw sheet edge is hidden inside the shaped item. Yet, the mechanism of the hemming process is much more complex than might be judged from the description of a pure bending process. One example of the hemming process is the door hinge, where different deformation mechanisms operate.

It is a well known fact that properly designed and performed, each stage of the hemming process can effectively eliminate or at least minimise the majority of defects caused by metal deformation. For this reason, the whole range of parameters gov-erning the process of forming should be subject to very carefully monitoring, remembering that it af-fects the final product quality, mainly in terms of gaps existing between the rolled over edges of adja-cent components, or folds and fissures, i.e. the roll-in, roll-out, warp and recoil described by Livatyaliet


Therefore, to minimise errors occurring when the process of hemming is designed, a comprehensive understanding of the process itself is necessary, to which hitherto not much attention has been paid. This can be achieved by improving the already existing techniques or developing entirely new ones, to achieve better insight into the forming process. The ultimate goal is to determine experimentally the process limit parameters, the process kinematics, and the geometry and surface quality obtainable in the operation of sheet metal hemming, on the basis of research performed by Swillo et al. (2005 and 2006). Due to this it will be possible to evaluate the results of an inspection through comparison of model results with the experimental data generated by a specially developed vision system. The studies will enable quick and accurate analysis of the hemming process for any geometric and material-related parameters found in the selected segments of a car body. In contrast to the time-consuming and less accurate methods of assessment based on an optical system, the proposed method using a vision system will allow an immediate analysis of the finished product.

2. EXPERIMENTAL APPARATUS

A special column-shaped device was designed and built to perform the hemming process; an option has also been provided for quick setup of the device (figure 2a). The device consists of the following main parts: two columns fixed in the bottom plate, guide sleeves fixed in the top plate, and a forming tool. The forming elements include a die with an option allowing changes in the bending radius and a punch with an option allowing changes in the pre-hemming angle. The concept of the device for the hemming process assumes easy replacement of the forming elements. Moreover, the use of punch guides in the device allows precise control of the punch travel with the possibility of measuring the clearance between the die and the punch. This solution allows precise determination of the forming process parameters, and has therefore been used in the studies of numerical modelling of the hemming process. The material used in the hemming test was aluminium sheet (A1050). Figure 2b shows the measurement stand for the hemming tests equipped with two systems: a vision system for recording the process run and data acquisition from the force sensor, and a displacement sensor. To study the hemming process (curved surface and curved edge), a special hydraulic press, applied previously in studies of the flat surface and straight edge by Świłło et al. (2011, 2012), was used.

Fig. 1. Schematics of the three-step hemming process: a) flanging, b) pre-hemming, c) final hemming.

Fig. 2. Experimental apparatus for the hemming process: a) schematic of the designed apparatus, b) hemming tool.


Due to its high stiffness, large working space, maximum pressure of 40 kN and low punch operating speed during forming, the press allowed obtaining forming conditions similar to those used during industrial hemming of sheet metal. The hemming test was carried out using a force measurement system in the form of an axial strain gauge mounted on the model press. Used in combination with the displacement sensor attached to the press, it enabled full control of the hemming force as a function of the punch position. The measurements were carried out in a Matlab/Simulink environment that allows for block construction of the performed measurement tasks based on image processing and data acquisition, originally proposed by Higham and Higham (2005) and Chen et al. (1999).
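The acquisition and post-processing described above were implemented by the author in the Matlab/Simulink environment; the short Python sketch below only illustrates the generic post-processing step of combining the two sampled signals (force from the calibrated strain gauge and punch position from the displacement sensor) into a force–punch-position curve. The sampling rates and the calibration factor are assumptions made for illustration, not values from the actual stand.

```python
import numpy as np

def force_vs_position(t_force, force_n, t_disp, disp_mm):
    """Interpolate the force signal onto the displacement time base and
    return (punch position [mm], hemming force [N]) pairs."""
    force_on_disp = np.interp(t_disp, t_force, force_n)
    return disp_mm, force_on_disp

# --- example with synthetic data (assumed sampling rates) ---
t_f = np.linspace(0.0, 10.0, 2001)           # force sampled at 200 Hz
t_d = np.linspace(0.0, 10.0, 501)            # displacement sampled at 50 Hz
gauge_volts = 0.002 * t_f                    # raw strain-gauge output (V)
CAL = 2.0e5                                  # assumed calibration, N per volt
force = CAL * gauge_volts
position = 2.0 * t_d                         # punch travel, mm

pos, frc = force_vs_position(t_f, force, t_d, position)
print(pos[:3], frc[:3])
```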

3. METHODOLOGY OF HEMMING ANALYSIS

Based on industrial experience and numerous research results, it becomes obvious that the final quality in the hemming process depends on a complex interaction between the material properties, geometry and process parameters, as reported in many works on this subject. The studies discussed above, and a number of other related works (Livatyali et al., 2000 and Graf & Hosford, 1994), clearly indicate the need to search for proper relationships between the product quality, determined by the presence or absence of such defects as cracks or folds in the sheets, and the geometry of the hemming process. It should be remembered that all of the above indicated features directly affect the final evaluation of the product quality and functionality, and thus indirectly the quality and functionality of the whole car. The methods commonly used in industry to assess the quality of such products are either visual methods or methods using simple devices such as a feeler gauge. The repeatability in such cases is unsatisfactory. This statement leads to the conclusion that it is necessary to develop a vision system based on an advanced method of measurement and control of all process parameters. Therefore, the aim of the project is to use all three main parameters of the process, that is: deformation during forming, change in the shaped product geometry, and quality of the shaped surface.

All these parameters can be used in a comprehensive assessment of the hemming process quality referred to the structural parts of a car body design. The use of these three parameters demands further development and implementation of advanced measurement techniques allowing for their full identification. Thus, the final task of the running project was to design and manufacture a portable vision measurement system to allow inspection of the chosen three parameters.

3.1. Geometry and deformation measurement

The measurement of deformation in a bent sample, where the bending angle is between 20 and 180 degrees, is a serious problem due to the high localisation of the non-linear strain distribution. In samples with a thickness of 1 mm, the maximum deformation will be concentrated in an area of the size of micrometres. Consequently, the selected method of measuring deformation should be characterised by both high resolution and high accuracy, with the ability to determine the deformation in different variations of the hemming process (the curved surface and the curved hemmed edge – 3D). Therefore, the strain measurement algorithm presented by the author in this study is based on a previously proposed solution by Swillo et al. (2005), the so-called ALM (Angle Line Method).

Fig. 3. Vision based measurement for hemming: a) schematics for the strain and geometry analysis, b) stationary vision system.


This method allows continuous strain determination in the examined sample, where the discretisation of measurements is imposed only by the image resolution. Strain measurement using this method involves the application of a simple pattern line onto the examined object, in the area of the expected deformation, and the line should be applied at a certain angle. Then, to identify the line, numerical image processing is used, allowing full automation of the strain determination technique. The advantage of this method over the traditional techniques using different mesh geometries lies in the process of measurement discretisation, which in the case of the proposed method has no major restrictions on account of the pattern geometry used. In addition, another advantage of ALM is its simplicity in use, involving a simple pattern which can be drawn with just any pen. Next, as an extension of this method, a new optical configuration has been proposed by Świłło et al. (2011), allowing the user to simultaneously calculate geometry as well as deformation from a single CCD camera. The proposed method is based on an angled laser line projected onto the examined element subjected to displacements. Figure 3 shows in detail the schematics of the stationary measurement as well as the real experimental equipment. An example of this method in its practical embodiment has been described in detail in the research performed by Świłło and Czyżewski (2011), Świłło et al. (2012) and Świłło (2012), where the strain was measured in the hemmed sample in an area of 0.6 mm, while the maximum deformation covered an area of a width not exceeding 50 µm.

In addition, an experimental study of grid pattern limitations is demonstrated in figure 4a. For the specimen with the square pattern, the grid shape is unrecognisable, so the strain cannot be calculated (figure 4b). Next, a circle grid, in which several objects (ellipses) were recorded, was analysed. A selected circle shows strong grid defects (cracking) that make strain measurement in this case very difficult. In particular, circle shape recognition by image processing could be difficult to predict under such deformations. Finally, a single line pattern with no visual defects, such as broken parts, could be easily recognised and analysed in the case of hemming strain measurement (figure 4b).
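The exact ALM formulation is given in the cited works; the sketch below is only a schematic illustration of the line-pattern idea under simplifying assumptions: the bright pattern line is located in each image column by an intensity-weighted centroid, and a local true strain along the line is estimated from the ratio of deformed to reference segment lengths. The pixel size and the synthetic images are hypothetical.

```python
import numpy as np

def line_profile(image):
    """Return, for every image column, the centre row of a bright line
    (intensity-weighted centroid).  `image` is a 2-D float array."""
    rows = np.arange(image.shape[0])[:, None]
    weights = image - image.min()
    return (weights * rows).sum(axis=0) / weights.sum(axis=0)

def strain_along_line(y_ref, y_def, pixel_mm=0.01):
    """Local true strain along the pattern line, comparing segment lengths
    of the reference and deformed profiles (both sampled per column)."""
    dx = pixel_mm
    ds_ref = np.hypot(dx, np.diff(y_ref) * pixel_mm)
    ds_def = np.hypot(dx, np.diff(y_def) * pixel_mm)
    return np.log(ds_def / ds_ref)

# synthetic example: a straight inclined line vs. a locally bent one
cols = np.arange(200)
y0 = 50 + 0.3 * cols                                   # reference line (pixels)
y1 = y0 + 8.0 * np.exp(-((cols - 100) / 15.0) ** 2)    # deformed line
img0 = np.zeros((200, 200)); img1 = np.zeros((200, 200))
img0[np.round(y0).astype(int), cols] = 1.0
img1[np.round(y1).astype(int), cols] = 1.0
eps = strain_along_line(line_profile(img0), line_profile(img1))
print("max local strain:", eps.max())
```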

3.2. Surface quality measurement

Another parameter, previously proposed by Swillo et al. (2006), is used to judge the surface quality for hemming process evaluation. The common practice of assessing hemming quality is based on human inspection of the exposed hemmed surfaces.

Fig. 4. Strain measurement for hemming: a) direct comparison of results for three methods: circle, grid and ALM, b) grid pattern images for three methods of strain measurement.

Fig. 5. Micro-crack formation measurement for the hemmed surface.


This inconsistency in the quality control methodology results in undesirable quality variation in the hemmed parts. Many researchers have used various non-optical measurement and image processing methods to study and describe fractures and cracking (Epstein, 1993). The first reported application of the vision method to fracture analysis was by McNeill et al. (1987). Since then many variations of this approach have been developed and implemented to determine the crack length or the displacement field in the region around the crack (Livatyali et al., 2004).

In the hemming process, there is a large strain concentration at the edge of the hemmed surface, conferring considerable roughness to this surface. The size of the roughness is a function of the size of the deformation and changes gradually from smooth to very rough, eventually giving rise to the formation of local cracks. In a perfectly run hemming process, the surface remains smooth. In practice, the commonly used method of surface quality assessment is a visually adopted roughness reference level at which the product should be rejected. This method of elimination leads to a lack of regularity and repeatability in the process of product elimination. The lack of objectivity is a fundamental error, and ultimately has an impact on the whole process of elimination. Therefore, it was necessary to propose an alternative route based on vision control. The proposed method of image analysis of the hemmed surface takes into account the deformation mechanism through statistical analysis of micro-crack formation (figure 5). To find a quantitative criterion of the surface deformation, the cumulative length of micro-cracks is determined and referred to the number of these micro-cracks. The occurrence of a local maximum in the graph proves that the maximum number of micro-cracks has been formed and their fusion has started taking place. This means that localisation of deformation has occurred, which favours fusion of the already existing micro-cracks rather than the formation of new ones. As the crack formation index, the average length of micro-cracks corresponding to the described maximum has been adopted.
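A minimal sketch of one reading of the criterion described above: the step at which the number of detected micro-cracks peaks (after which fusion dominates) is located, and the average micro-crack length at that step is taken as the crack formation index. The crack-length lists are hypothetical placeholders, not measured data.

```python
import numpy as np

def crack_formation_index(crack_lengths_per_step):
    """One reading of the criterion described in the text: find the step at
    which the number of detected micro-cracks reaches its maximum -- beyond
    it fusion dominates -- and return the average micro-crack length
    (cumulative length / count) at that step."""
    counts = np.array([len(c) for c in crack_lengths_per_step])
    avg_len = np.array([np.sum(c) / max(len(c), 1) for c in crack_lengths_per_step])
    peak = int(np.argmax(counts))          # step with the most micro-cracks
    return peak, avg_len[peak]

# hypothetical crack lengths (micrometres) detected at successive punch steps
steps = [
    [],                          # smooth surface, no cracks yet
    [5.0, 4.0],
    [12.0, 9.0, 7.0, 6.0],       # many short cracks form
    [30.0, 26.0, 22.0],          # fusion starts: fewer, longer cracks
    [80.0, 61.0],
]
peak_step, index = crack_formation_index(steps)
print(f"crack formation index ~ {index:.1f} um (at step {peak_step})")
```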

4. PORTABLE MEASUREMENT SYSTEM

As a final result of the research investigation in the area of hemming process analysis, a portable, hand-held vision-based measurement system has been developed (figure 6a). The portable vision system is applicable to the analysis of surface edge hemming under production conditions. All three techniques described above have been implemented, i.e. surface inspection, geometry reconstruction and strain measurement. First, in the developed technique of scanning along the edge of the inspected part, manual capturing of images takes place to provide information on the hemmed surface shape. For that reason, the laser line generator is located on top of the portable system, rotated relative to the camera by a specifically selected angle. Figure 6b shows the result of the profile calculation for an arbitrarily chosen location within the hemmed surface. Second, using the portable vision system with data on the average length of micro-cracks and crack initiation conditions, we are able to analyse and characterise the hemming quality for any given material and processing conditions.

Fig. 6. Methodology for the measurement using hand-held vision system: a) schematics of the system, b) profile and strain measurement, c) surface inspection.


Figure 6c demonstrates the inspection technique for micro-crack evaluation based on the use of a coaxial illumination system. The third measurement takes place only when a simple pattern, i.e. an angled single line, is applied to the sheet surface in the region of the anticipated hemming deformation. To identify that pattern, the ALM solution improved by the author, based on digital image processing techniques, is used to provide a more reliable solution, more accurate strain measurements and full automation. To summarise, figure 6 demonstrates the portable vision measurement system capabilities: (a) surface quality characterisation by micro-crack evaluation, (b) measurement of the surface geometry with profile, and (c) continuous strain measurement with the improved ALM.
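The profile reconstruction relies on the known angle between the laser line generator and the camera; the sketch below illustrates only the basic triangulation relation (surface height proportional to the lateral shift of the laser line in the image). The pixel size and laser angle are illustrative assumptions, not the calibration of the actual hand-held device.

```python
import numpy as np

def height_profile(ref_line_px, meas_line_px, pixel_mm=0.02, laser_angle_deg=30.0):
    """Convert the lateral shift of a projected laser line (in pixels,
    per image column) into surface height by simple triangulation:
    h = shift * pixel_size / tan(laser angle).  Calibration values are
    illustrative assumptions, not the parameters of the actual device."""
    shift_mm = (np.asarray(meas_line_px) - np.asarray(ref_line_px)) * pixel_mm
    return shift_mm / np.tan(np.radians(laser_angle_deg))

# synthetic example: the line shifts by up to 12 px over a hemmed radius
cols = np.arange(300)
reference = np.full_like(cols, 120.0, dtype=float)
measured = reference + 12.0 * np.exp(-((cols - 150) / 40.0) ** 2)
z = height_profile(reference, measured)
print(f"maximum profile height: {z.max():.3f} mm")
```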

5. SUMMARY

In this paper the author presents several solutions that have been developed in the area of experimental analysis of the hemming process. Since the hemming deformation is concentrated in a small corner area, advanced vision-based methods were applied to measure the key parameters: strain, geometry and surface quality. As a result of using the surface quality evaluation method, the hemming quality could be analysed and characterised for any given material and processing conditions. Next, using the strain measurement method for large deformations, the continuous (high-resolution) strain distribution and the maximum strain (peak localisation and value) were successfully computed. Finally, geometry reconstruction was performed by the laser scanning method.

As a final result of the research investigation, a specially designed portable vision-based measurement system has been developed to conduct all the experiments instead of the previously used stationary solutions. Currently, the hemming experimental investigation confirms that the surface strain distribution is a major factor in the solution to the problem of hemming limit diagram representation. By calculating the strain distribution more accurately using the new hand-held system and including the history of deformation, a new model of hemming limit diagram representation can be created.

Acknowledgements. Scientific work financed as a research project from funds for science in the years 2009-2011 (Project no. N N508 390737).

REFERENCES

Chen, K., Giblin, P., Irving, A., 1999, Mathematical exploration with MATLAB, Cambridge University Press.

Epstein, J.S., 1993, Experimental techniques in fracture, New York, VCH Publishers.

Graf, A., Hosford, W., 1994, The influence of strain-path changes on forming limit diagrams of Al 6111 T4, International Journal of Mechanical Sciences, 36/10, 897-910.

Higham, D.J., Higham, N.J., 2005, MATLAB guide, Society for Industrial and Applied Mathematics.

Livatyali, H., Müderrisoğlu, A., Ahmetoğlu, M.A., Akgerman, N., Kinzel, G.L., Altan, T., 2000, Improvement of hem quality by optimizing flanging and pre-hemming operations using computer aided die design, Journal of Materials Processing Technology, 98/1, 41-52.

Livatyali, H., Laxhuber, T., Altan, T., 2004, Experimental investigation of forming defects in flat surface–convex edge hemming, Journal of Materials Processing Technology, 146/1, 20-27.

McNeill, S.R., Peters, W.H., Sutton, M.A., 1987, Estimation of stress intensity factor by digital image correlation, Eng. Fract. Mech., 28/1, 101-112.

Muderrisoglu, A.M., Murata, M., Ahmetoglu, M.A., Kinzel, G., Altan, T., 1996, Bending, flanging and hemming of aluminum sheet - an experimental study, Journal of Materials Processing Technology, 59/1-2, 10-17.

Swillo, S.J., Hu, S.J., Iyer, K., Yao, J., Koç, M., Cai, W., 2005, Detection and characterization of surface cracking in sheet metal hemming using optical method, Transactions of the North American Manufacturing Research Institute of SME, 33, 49-55.

Swillo, S., Iyer, K., Hu, S.J., 2006, Angled Line Method for Measuring Continuously Distributed Strain in Sheet Bending, ASME Journal of Manufacturing Science and Engineering, 128, 651-658.

Świłło, S., Kocańda, A., Czyżewski, P., Kowalczyk, P., 2011, Hemming Process Evaluation by Using Computer Aided Measurement System and Numerical Analysis, Proc. Conf. Technology of Plasticity 2011, eds. Gerhard Hirt, A. Erman Tekkaya, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Aachen, 633-637.

Świłło, S., Czyżewski, P., 2011, Analiza procesu zawijania z wykorzystaniem pomiarów wizyjnych i obliczeń numerycznych (MES), Zeszyty Naukowe, 238, Oficyna Wydawnicza Politechniki Warszawskiej, 93-98 (in Polish).

Świłło, S., Czyżewski, P., Kowalczyk, P., 2012, An experimental study of deformation load for hemming process, Przegląd Mechaniczny, 5, 34-37 (in Polish).

Świłło, S., 2012, Hemming process strain measurement using Angled Line Method (ALM), Hutnik, 6, 443-446 (in Polish).

MEASUREMENT APPARATUS FOR THE ANALYSIS OF THE HEMMING PROCESS

Abstract


A new, portable experimental system has been proposed for the analysis of the hemming process. The introduction presents information on the need to use vision systems in the analysis of problems occurring in the hemming process. Next, the design of the tooling for carrying out the three-step hemming process is presented. Among the numerous vision-based measuring devices used for measurement and 3D reconstruction of hemmed elements, a scanning method proposed by the author was selected. The article addresses the problems of the proposed measurement techniques and the image processing procedure with reference to experimental examples. First, the recorded image of the sample is analysed with respect to its geometry. Further on, details of the proposed new method of geometry reconstruction, together with the measurement of strain values, are presented. Finally, a portable quality control system for industrial use is presented.

Received: October 28, 2012 Received in a revised form: December 4, 2012

Accepted: December 13, 2012


Computer Methods in Materials Science, Vol. 13, 2013, No. 2, 333–338, ISSN 1641-8581

AN EXPERIMENTAL AND NUMERICAL STUDY OF MATERIAL DEFORMATION OF A BLANKING PROCESS

SŁAWOMIR ŚWIŁŁO*, PIOTR CZYŻEWSKI

Faculty of Production Engineering, Warsaw University of Technology, ul. Narbutta 85, 02-524 Warszawa, Poland

*Corresponding author: [email protected]

Abstract

An experimental and numerical investigation is carried out in order to determine the material deformation in a blanking process. The highly localised, large strain distribution during the process, at the last stage of complete material separation, influences the final surface quality of the product. The method commonly used in the simulation of the blanking process is based on a numerical approach. However, due to the large plastic deformation of the elements, it is highly recommended to use a remeshing procedure and other estimation techniques to simulate the last stage. To verify the final results and the theoretical model, other methods are required. The paper presents an implementation of experimental investigation in the field of displacement and strain measurement using the digital image correlation (DIC) technique. The authors present experimental results for a 1 mm thick specimen in the planar blanking process, where different clearances were used in the designed, fully automated apparatus. Finally, the experimental results were compared with the FEM simulation model, showing good agreement.

Key words: vision system, blanking process, correlation method, strain measurement, FEM

1. INTRODUCTION

Currently, the technology of making numerous electronic components and equipment, such as engine rotors or transformer cores, is based on the use of a punching process. The above-mentioned components are assembled by packeting a group of cut-out elements. Hence, the quality of the single components in a packet is very important for the overall quality of the electrical assembly operation. The limiting factor is an excessively large burr on the cutting edge, which causes inaccurate adhesion of the sheet metal in a packet and serious deterioration of the quality of electrical assemblies. Finding a solution to this problem is one of the key issues in this technology of making components, and one of the methods currently applied is an experimental analysis of the cutting process.

Experimental analysis of the blanking process is a very complex issue and for a long time it lacked a solution due to the occurrence of large and irregular deformations near the die edge and the punch. For a long period of time, the method used for the analysis of displacements was that of visioplasticity (Sutton et al., 1986), unfortunately giving less accurate results and requiring time-consuming calculations. Difficult-to-identify patterns of the grids, the blurred images of which were subjected to image processing, did not allow obtaining satisfactory results. Hence the need has emerged to search for new solutions in the field of numerical analysis. An outcome of this search was the development of a method based on the Fourier transform, applied in the analysis of displacement increase between the individual stages of a cutting process (Leung et al., 2004).


By comparing the image of a certain stage of deformation with the image immediately following this stage, a distribution of displacement was obtained, which enabled the deformation size to be determined. However, certain conditions had to be satisfied to make such calculations possible. The goal was achieved by designing a special unit to carry out the punching process. With the punching process performed under static conditions, the authors were able to gain control of the image recording at a resolution relevant to the size of the material displacement, since the proposed method required visualisation of even the smallest displacements of the material, possible to achieve only with a sufficiently large image resolution. This was due to the fact that, instead of the typical markers applied to the surface in the form of a mesh, the material texture was examined. Since that time, mainly due to the rapid development of various vision techniques and access to high-resolution, high-speed cameras, many authors (Stegeman et al., 1999) have tried to solve this problem, getting results even when operating on millimetre samples. However, the results obtained in this way were mainly based on tests carried out with the aid of specially designed instruments, taking into account the conditions adequate for vision measurements, but irrelevant to the real conditions under which processes of this type are performed. Hence, the submitted studies lacked any conclusions regarding the tool wear behaviour and analysis of the crack formation tendency, both these issues being quite fundamental in control of the tool performance and monitoring of the process run.

The authors’ proposal for the study of the cutting process relates to vision measurements taken under the real conditions of the punching operation. These conditions demand taking into account external factors such as vibration, adequate lighting and vision access to the area of material deformation. All these factors make the analysis of the deformation process of the die-cut materials a great experimental challenge in the field of measurement techniques, for both experimental and numerical methods (Makich et al., 2008, Brokken et al., 1998, and Hambli, 2001).

2. EXPERIMENTAL SET-UP

The schematic representation of the measurement stand is shown in figure 1. A specially designed illumination system, allowing for the small measurement area and the diversity of material structures, enables taking a sequence of images captured by the vision system with a specially chosen lens and a digital camera recording hundreds of photos per second in memory. The information gathered in this way is transferred to the computer memory and subjected to further numerical analysis, taking into consideration two stages of the process shown in figure 1a, i.e. plastic flow in the initial phase and crack formation next. For the tests an aluminium plate was used. From a sheet with overall dimensions of 100x80 millimetres and 1.5 millimetres thick, strips of 1.5x7x35 mm dimensions were cut out. Tests were carried out using a specially designed blanking apparatus, with several elements such as: a base, side walls, an upper connecting element and a bearing shell, as well as a sliding element with clamping of the upper cutting surface, where the sheet metal is pressed between the plate and the die. Initially, the blanking process was performed using only a hand holder. Currently, the process is carried out using a stepper motor, with precise punch location for each step of deformation (figure 1b). The final configuration of the experimental set-up is demonstrated in figure 1c, where all the systems, i.e. the optical, illumination and vision systems, are presented.

Fig. 1. Schematic of the experimental set-up: a) blanking apparatus, b) vision system configuration.


To perform an analysis of the recorded images of the surface of the cut material, advanced numerical image processing solutions based on digital image correlation have been used. However, a high specimen surface quality is required, where the material texture combined with its illumination are the major parameters, as shown in figure 2. Next, the optical magnification is an important parameter, since the DIC method is sensitive to the texture marks. To accurately measure the material flow on the surface of an object, the texture patterns need to be related to small subset areas. To meet this requirement, several solutions are applied, such as dividing the examined area into small sub-groups or using high resolutions. In the tests carried out, although a low-resolution camera was used (640x480 pixels), any possible inaccuracies in the calculations were compensated by optical zoom and by limiting the analysis to a small area with the highest strain concentration (figure 3).

3. STRAIN MEASUREMENT

The measurement of deformation in a punched sample is a serious problem due to the high localisation of the non-linear strain distribution. Assuming that the die-cut sheet has a thickness of 1 mm, the maximum strain will be concentrated in an area not larger than tens of micrometres. Therefore, the strain measurement method should be characterised by high resolution and high accuracy. For this reason, a method for strain measurement has been proposed that demands the use of combined advanced machine vision solutions based on the correlation of two images. The numerical process of comparing two images is performed for each pixel in the examined area, which enables high-accuracy determination of the measured parameters. Due to this, the discretisation of the measurement is imposed solely by the image resolution. The measurement conditions, however, require that the test area is adequately illuminated and that the numerical calculations refer to small displacements of the material.

Figure 4 shows the results of the surface discretisation for the sample before and after the process using a virtual grid pattern. An advantage of this method over the traditional techniques using different mesh geometries is the process of measurement discretisation, which in the case of the proposed solution is not restricted by the geometry of the pattern used. An additional advantage of the proposed solution is its simplicity, involving the use of a simple natural pattern resulting directly from the texture of the material.
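A minimal sketch of the subset-matching idea behind DIC, assuming an integer-pixel search and zero-normalised cross-correlation as the similarity measure; a production DIC code would additionally apply sub-pixel interpolation and subset shape functions. The subset size, search radius and speckle images below are illustrative assumptions.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalised cross-correlation between two equal-sized patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_subset(ref, cur, y, x, half=10, search=5):
    """Integer-pixel displacement of the subset centred at (y, x) that
    maximises ZNCC within +/- `search` pixels."""
    patch = ref[y - half:y + half + 1, x - half:x + half + 1]
    best, best_uv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            cand = cur[y + dv - half:y + dv + half + 1,
                       x + du - half:x + du + half + 1]
            c = zncc(patch, cand)
            if c > best:
                best, best_uv = c, (dv, du)
    return best_uv, best

# synthetic test: random speckle texture shifted by (2, 3) pixels
rng = np.random.default_rng(0)
ref = rng.random((80, 80))
cur = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
(dv, du), score = match_subset(ref, cur, 40, 40)
print(f"recovered displacement: v={dv}, u={du}, zncc={score:.3f}")
```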

Fig. 2. Specimen surface quality under different illumination and after surface machining.

Fig. 3. Methodology of the planar blanking process: a) blanking apparatus, b) initial step, c) final step (just before cracking).


For the kinematics calculation, a method of grid analysis has been applied which takes into consideration the impact of the nearby surroundings (Swillo, 2001). In the mathematical formulation this means the use of a directional derivative of the gradient of the displacement increment. The surroundings in relation to the grid are conceived as neighbouring points, the assessed quantity of which depends upon the position of the analysed node. Let us choose some point \(x_i^{(n)}\) from the neighbourhood of the point \(x_i\). On the basis of the directional derivative of the displacement increment vector we get:

\[
\Delta u_{i,j}\, v_j^{(n)} = \frac{\Delta^2 u_i^{(n)}}{\Delta s^{(n)}}, \qquad i, j = 1, 2, 3 \tag{1}
\]

where \(\Delta^2 u_i^{(n)}\) is the second-order increment of the displacement vector, \(\Delta s^{(n)}\) is the distance between the points \(x_i^{(n)}\) and \(x_i\) in the given direction, and \(v_j^{(n)}\) are the directional cosines of the direction n. The upper bracketed index denotes the chosen direction. The displacement increments \(\Delta u_{i,j}\) are unknown in this equation. The solution of the equations is available when at least two directions are investigated and the least squares method is used.

Finally, the total logarithmic strain was calculated as presented in figure 5, showing good agreement with the FEM results. In the numerical simulation a large grid pattern was used intentionally (similar to the virtual grid – figure 4) in order to verify the image processing procedure that was performed based on the correlation method.
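A minimal 2-D sketch of the least-squares step implied by equation (1): for a node with several neighbouring directions, the relative displacement increments are assembled into an overdetermined linear system and solved for the displacement-gradient increment; for illustration, the principal logarithmic strains are then evaluated from a total displacement gradient. The directions and gradient values are synthetic, not data from the experiment.

```python
import numpy as np

def displacement_gradient(directions, rel_increments):
    """Least-squares solution of Eq. (1) in 2-D.

    directions      : (N, 2) unit vectors v^(n) towards neighbouring points
    rel_increments  : (N, 2) values  d^(n) = delta2 u^(n) / delta s^(n)
    Returns the 2x2 displacement-gradient increment  du_{i,j}.
    """
    N = len(directions)
    A = np.zeros((2 * N, 4))
    b = np.zeros(2 * N)
    for n, (v, d) in enumerate(zip(directions, rel_increments)):
        A[2 * n, 0:2] = v        # row for i = 1: du_11*v1 + du_12*v2 = d_1
        b[2 * n] = d[0]
        A[2 * n + 1, 2:4] = v    # row for i = 2: du_21*v1 + du_22*v2 = d_2
        b[2 * n + 1] = d[1]
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol.reshape(2, 2)

def logarithmic_strain(grad_u_total):
    """Principal logarithmic (true) strains from a total displacement gradient."""
    F = np.eye(2) + grad_u_total        # deformation gradient
    C = F.T @ F                         # right Cauchy-Green tensor
    stretches_sq = np.linalg.eigvalsh(C)
    return 0.5 * np.log(stretches_sq)   # ln(lambda) for each principal direction

# synthetic check: a known gradient reproduced from three neighbour directions
true_grad = np.array([[0.02, 0.01],
                      [0.00, -0.015]])
dirs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]])
incs = dirs @ true_grad.T               # d^(n)_i = du_{i,j} v_j^(n)
print(displacement_gradient(dirs, incs).round(4))
print(logarithmic_strain(true_grad).round(4))
```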

4. DISCUSSION OF THE RESULTS

Next, an additional experimental investigation was conducted for the planar blanking process to determine the influence of the clearance on the material fracture. Ten sets of experiments were conducted in the clearance range of 0.035 mm to 0.485 mm (figure 6). During these experiments the clearance between the material and the punch was measured and the blanking process was recorded in the computer memory. The punch penetration for each case was obtained every time up to fracture. Figure 7 shows that the relation between the punch penetration and the clearance is most likely linear. That prediction could be successfully implemented in numerical calculations.
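A linear model of this kind can be fitted directly to the measured pairs; the sketch below shows such a fit with numpy, but the clearance–penetration values used are illustrative placeholders, not the data of figure 7.

```python
import numpy as np

# clearance [mm] and punch penetration at fracture [mm]; the numbers below
# are illustrative placeholders, not the measured values from figure 7
clearance = np.array([0.035, 0.085, 0.135, 0.185, 0.235,
                      0.285, 0.335, 0.385, 0.435, 0.485])
penetration = np.array([0.42, 0.45, 0.47, 0.50, 0.52,
                        0.55, 0.57, 0.60, 0.63, 0.65])

slope, intercept = np.polyfit(clearance, penetration, 1)   # linear model
predicted = slope * clearance + intercept
r2 = 1 - np.sum((penetration - predicted) ** 2) / np.sum(
    (penetration - penetration.mean()) ** 2)
print(f"penetration ~= {slope:.3f} * clearance + {intercept:.3f}  (R^2 = {r2:.3f})")
```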


Fig. 4. Surface discretization using the digital image correlation method (virtual grid pattern applied to the real surface): a) initial stage, b) final stage.

Fig. 5. Results of the True Strain for the planar blanking process: a) experiment, b) FEM.


5. SUMMARY

In the currently ongoing project, the deformation in a planar blanking process was monitored up to fracture. The DIC method was used to numerically determine the deformation, and the FE method results were compared with the experimental results. The proposed automatic vision system enables the realisation of measurements and calculations in a quick and precise manner for the blanking process, even for small thicknesses (less than 1 mm). The experimental examples presented in this paper refer to two-dimensional displacement analyses (according to equation (1)) and strain measurements using the grid method. The results indicate that there are ample possibilities in the field of experimental analysis of material flow, and they are a valuable tool for verifying numerical methods.

Acknowledgements. Scientific work financed as a research project from funds for science in the years 2011-2013 (Project no. N N508 628140).

REFERENCES

Brokken, D., Brekelmans, W.A.M., Baaijens, F.P.T., 1998, Numerical modeling of the metal blanking process, Journal of Materials Processing Technology, 83, 192-199.

Hambli, R., 2001, Comparison between Lemaitre and Gurson damage models in crack growth simulation during blanking process, International Journal of Mechanical Sciences, 43, 2769-2790.

Leung, Y.C., Chan, L.C., Tang, C.Y., Lee, T.C., 2004, An effective process of strain measurement for severe and localized plastic deformation, International Journal of Machine Tools and Manufacture, 7-8, 669-676.

Makich, H., Carpentier, L., Monteil, G., Roizard, X., Chambert, J., 2008, Metrology of the burr amount - correlation with blanking operation parameters (blanked material - wear of the punch), Int. J. Mater. Form., 1, 1243-1246.

Stegeman, Y.W., Goijaerts, A.M., Brokken, D., Brekelmans, W.A.M., Govaert, L.E., Baaijens, F.P.T., 1999, An experimental and numerical study of a planar blanking process, J. Mat. Proc. Techn., 87, 266-276.

Sutton, M.A., Mingqi, Ch., Peters, W.H., Chao, Y.J., McNeill, S.R., 1986, Application of an optimized digital correlation method to planar deformation analysis, Image and Vision Computing, 3, 143-150.

Swillo, S., 2001, Automatic of strain measurement by using image processing, Proc. Conf. Engineering Design and Automation 2001, eds, Parsaei, H.R., Gen, M., Leep, H.R., Wong, J.P., Las Vegas, Nevada, 272-277.

EXPERIMENTAL AND NUMERICAL ANALYSIS OF THE BLANKING PROCESS IN DEFORMATION MEASUREMENTS

Abstract

Fig. 6. Set of experimental results for different clearances (0.035-0.485 mm), monitored up to fracture.

Fig. 7. Influence of clearance on the material punch penetration.


Experimental and numerical investigations were carried out in order to determine the magnitude of deformation in the blanking process. Large strain values, combined with their concentration in small areas and leading to material separation, have a strong influence on the final product quality. Traditional methods of analysing these phenomena rely on the use of numerical methods. However, due to the high strain concentration, it is advisable to verify such results with other, experimental methods. The article presents the possibility of applying digital image processing with the use of correlation to the measurement of displacement and strain fields. Experimental results for the blanking process with different clearances are presented. The experimental measurements were compared with an FEM computer simulation.

Received: October 17, 2012 Received in a revised form: October 22, 2012

Accepted: November 5, 2012


Computer Methods in Materials Science, Vol. 13, 2013, No. 2, 339–344, ISSN 1641-8581

MODELLING OF STAMPING PROCESS OF TITANIUM TAILOR-WELDED BLANKS

PIOTR LACKI*, JANINA ADAMUS, WOJCIECH WIĘCKOWSKI, JULITA WINOWIECKA

Częstochowa University of Technology, ul. Dąbrowskiego 69, 42-201 Częstochowa, Poland
*Corresponding author: [email protected]

Abstract

In the paper, numerical simulation results of sheet-titanium forming of tailor-welded blanks (TWB) are presented. Forming of spherical caps from uniform and welded blanks is analysed. Grade 2 and Grade 5 (Ti6Al4V) titanium sheets with a thickness of 0.8 mm are examined. A three-dimensional model of the forming process and the numerical simulation are performed using the ADINA System v.8.6, based on the finite element method (FEM). The analysis of the mechanical properties and geometrical parameters of the weld and its adjacent zones is based on experimental studies. Drawability and the possibility of plastic deformation are assessed based on a comparative analysis of the determined plastic strain distributions in the drawpiece material and the thickness changes of the cup wall. The preliminary experimental studies confirm the correctness of the assumptions in the presented numerical model of the forming process. The results obtained in the numerical simulations show some difficulties occurring in forming of welded blanks and provide important information about the process course. They might be useful in the design and optimization of the forming process.

Key words: TWB blanks, sheet-metal forming, FEM modelling, titanium sheet

1. INTRODUCTION

Tailor-Welded Blanks (TWB) are becoming more popular in industrial applications in those sectors where reduction of weight and manufacturing costs is important. They are of particular interest in the automotive and aircraft industries, where there is a growing demand for shell parts (drawpieces) meeting specific functional properties, which include low fuel consumption and sufficient strength of elements responsible for usage safety (Hyrcza-Michalska & Grosman, 2008; Sinke et al., 2010; Schubert et al., 2001).

Reduction of the production costs of elements made of TWB blanks results from the limitation of material usage and of the number of required forming operations, and consequently a decline in the demand for tools. Application of TWB blanks allows for achieving, in one operation, drawpieces characterised by mixed strength and functional properties. It also allows for a reduction of discards from cutting and blanking, and a decrease in the number of parts needed to produce a component. It is estimated that application of TWB blanks can reduce the number of required parts to 66% and reduce the weight by half (Qiu & Chen, 2007; Babu et al., 2010; Meinders et al., 2000).

Application of welded blanks to products manufactured with the use of a stamping process requires solving many problems, especially in the case of forming hard-to-deform sheets, such as alpha and beta titanium alloys. The presence of the weld (its geometric parameters), of different (generally lower) plasticity compared to the base material, and the heterogeneity of the stamped blank lead to a change in the material deformation scheme in comparison with the deformations that occur in a homogeneous material. This is due to weld dislocation, whose direction and magnitude depend on differences in the mechanical properties and thickness of the welded materials (Hyrcza-Michalska & Grosman, 2008; Babu et al., 2010; Kinsey et al., 2000).



In order to evaluate the suitability of welded blanks for forming processes, it is necessary to carry out several studies, including numerical simulations of the process, that will allow for prediction of the sheet behaviour in consecutive stages of the forming process (Ananda et al., 2006; Qiu & Chen, 2007; Meinders et al., 2000; Babu et al., 2010; Rojek, 2007; Hyrcza-Michalska et al., 2010; Lisok & Piela, 2003; 2004; Więckowski et al., 2011; Zimniak & Piela, 2000).

The increase in demand, among others in the aircraft industry, for structural elements with specific functional properties leads to a growth of interest in sheet-titanium forming. Generally, titanium Grade 2 sheets have good drawability; however, the produced drawpieces are characterised by low strength. On the other hand, titanium Grade 5 sheets have higher strength than titanium Grade 2 sheets, but they have a low propensity to plastic deformation, and this limits their application in forming processes (Adamus, 2010; 2009 a; 2009 b).

2. GOAL AND SCOPE OF THE WORK

The goal of the paper is the evaluation of changes in the deformation and displacement scheme of TWB blank material in consecutive stages of the forming process, using numerical simulation and experimental verification of changes in the wall thickness distribution of the drawpiece. In this study, the numerical simulation of drawing a spherical cap from welded sheets made of titanium Grade 2 and Grade 5 of the same thickness was performed, in order to evaluate its drawability and formability in traditional stamping processes. Additionally, calculations for the uniform Grade 2 and Grade 5 sheets were performed. Experimental studies are designed to confirm the validity of the assumptions made in the numerical model of the process (figure 1a).

Grade 2 and Grade 5 materials were joined using electron beam welding (EBW) technology. EBW causes some changes in the material microstructure. Analysis of the joint microstructure shows the occurrence of 5 zones – from the left: base material – Grade 5, heat affected zone (HAZ) in Grade 5, zone of joint penetration, heat affected zone in Grade 2 and base material – Grade 2. The zone of microstructure changes has a width of less than 3 mm. The HAZ in Grade 2 is wider than the HAZ in Grade 5. Its width is ~2282 µm, while the width of the HAZ in Grade 5 together with the zone of joint penetration is ~553 µm. Titanium Grade 5 has a globular, fine-grained structure. Grains of the α phase with separation of the β phase on the grain boundaries are visible. Higher magnification shows a change of the globular microstructure into a lamellar one at the transition of the HAZ in Grade 5 into the joint penetration zone. The microstructure of the border zone between the joint penetration and the HAZ in Grade 2 is more evolute than the border zone between the HAZ in Grade 5 and the joint penetration zone. The microstructure of the HAZ in Grade 2 shows big recrystallized grains of the α phase. α phase grains with lenticular grains represent the microstructure of the base material Grade 2. A rectilinear shape of grain boundaries is typical for recrystallized grains. The microstructure of the electron beam welded joint is shown in figure 1b.

Fig. 1. a) Drawpiece obtained during experimental research, b) microstructure of the electron beam welded joint.


3. NUMERICAL MODEL

A three-dimensional model of the stamping process was developed. The model comprises the material of the welded blank and the stamping tool consisting of a die, a punch and a blank-holder. The FEM geometry model is shown in figure 2.

In the calculations all elements corresponding to the tool were assumed to be perfectly rigid and for elements corresponding to the deformed sheet an isotropic elastic-plastic material model was applied (bilinear plastic material model).
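A sketch of the uniaxial response of a bilinear elastic-plastic (linear-hardening) material of the kind referred to above, using the Grade 2 values from table 2; the tangent (hardening) modulus is not given in the paper and is an assumed placeholder.

```python
import numpy as np

def bilinear_stress(strain, E=110e3, yield_stress=236.8, tangent_modulus=1.0e3):
    """Uniaxial stress [MPa] for a bilinear elastic-plastic material:
    linear elasticity up to the yield point, then linear hardening with a
    constant tangent modulus.  E and the yield stress are the Grade 2
    values from table 2 (in MPa); the tangent modulus is an assumed
    placeholder."""
    strain = np.asarray(strain, dtype=float)
    eps_y = yield_stress / E
    return np.where(strain <= eps_y,
                    E * strain,
                    yield_stress + tangent_modulus * (strain - eps_y))

eps = np.linspace(0.0, 0.05, 6)
print(np.round(bilinear_stress(eps), 1))
```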

Mutual displacement of the tool elements against each other was realised by immobilising the die and applying the displacement to the punch in the direction of the X axis. In the case of the blank-holder, its progressive motion was limited by a hold-down force Fd. A proper selection of the blank-holder force prevents wrinkling of the flange material (figure 3), and it also has a significant impact on the distribution of the drawpiece wall thickness. An optimal value of the blank-holder force was determined based on the preliminary numerical simulation of the stamping process.

Discretization of the blank material (TWB blanks) for the stamping process, in the form of a disc with diameter dk = 60 mm, was performed using four-node shell elements of specified thickness.

Modeling of the welded blank material required distinction of appropriate zones and taking into account different material properties in the weld vicinity. In the presented model 5 zones were distinguished: the weld zone (W), two heat affected zones (HAZ) located symmetrically on both sides of the weld (HAZ1, HAZ2) and two zones representing the base materials (M1, M2) – figure 2.

Measurements of the zones were performed during observation of the weld cross-section structure. In the calculations, a constant thickness of the weld and heat-affected zones, equal to the thickness of the welded blanks – 0.8 mm – was assumed. Some important geometric parameters of the model are presented in table 1. In the analysed case the weld was located in the drawpiece centre.

A contact interaction between the tool and the blank material plays an important role in the forming process (Adamus, 2010; 2009 a). In the numerical calculations a friction coefficient of 0.1 was set for the contact surfaces between the die, the blank and the blank-holder, where the working surfaces were lubricated, and a separate friction coefficient was set for the contact surface “punch – deformed material (blank)”, which was not lubricated.

Calculations were performed using the ADINA System v. 8.6, based on FEM, which allows for a non-linear description of material hardening and of the contact between the tool and the formed blanks.

Table 1. Parameters assumed in FEM model for the stamping process of TWB blanks.

Parameter                                        Value
blank diameter dk                                60 mm
clearance between punch and die l = dm - ds      2 mm
punch radius rs                                  16 mm
die fillet radius rm                             4 mm
blank thickness g                                0.8 mm
weld width W                                     1.9 mm
heat-affected zone width HAZ1                    1.7 mm
heat-affected zone width HAZ2                    1.0 mm
blank-holder force Fd                            3000 N
punch path hs                                    20 mm

The mechanical properties of the base material, heat affected zone and weld zone, which are required for performing the calculations, were determined based on the uniaxial tensile test as well as on the basis of changes in the hardness distribution within the weld cross-section.

Fig. 2. A discrete model of the forming process of a spherical cup made of TWB blank with the specified 5-zone model of the welded blank.


Test specimens were prepared using the TIG welding method. The mechanical properties of the material in the weld zone were estimated based on the relationship between the hardness and strength of the material, assuming that the material yield stress is in direct proportion to its hardness. The assumed mechanical properties are summarized in table 2.

Table 2. Experimentally determined material properties of Grade 2 and Grade 5 titanium, weld material, and HAZ material.

Material/zone   Tensile strength Rm [MPa]   Yield strength R0.2 [MPa]   Young's modulus E [GPa]   Poisson's ratio
M1-GRADE 2      316.6                       236.8                       110                       0.37
M2-GRADE 5      1002.4                      964.3                       110                       0.37
HAZ1            442.8                       368.3                       110                       0.37
HAZ2            798.5                       747.7                       110                       0.37
W               518.5                       375.0                       110                       0.37
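A small sketch of how the tabulated properties could be mapped onto the five zones of the blank model by the signed distance from the weld centreline, using the zone widths of table 1; which HAZ lies on the Grade 2 side, and the mapping function itself, are assumptions made only for this illustration.

```python
# Widths from table 1 (mm): weld W = 1.9, HAZ1 = 1.7, HAZ2 = 1.0.
# Properties (Rm, R0.2 in MPa) from table 2.  Which HAZ lies on the Grade 2
# side is an assumption made only for this illustration.
ZONES = {
    "M1-GRADE 2": {"Rm": 316.6, "R02": 236.8},
    "HAZ1":       {"Rm": 442.8, "R02": 368.3},
    "W":          {"Rm": 518.5, "R02": 375.0},
    "HAZ2":       {"Rm": 798.5, "R02": 747.7},
    "M2-GRADE 5": {"Rm": 1002.4, "R02": 964.3},
}

def zone_at(x_mm, weld=1.9, haz1=1.7, haz2=1.0):
    """Zone name for a point at signed distance x from the weld centreline
    (negative x towards Grade 2, positive towards Grade 5)."""
    if abs(x_mm) <= weld / 2:
        return "W"
    if x_mm < 0:
        return "HAZ1" if abs(x_mm) <= weld / 2 + haz1 else "M1-GRADE 2"
    return "HAZ2" if x_mm <= weld / 2 + haz2 else "M2-GRADE 5"

for x in (-5.0, -1.5, 0.0, 1.2, 5.0):
    z = zone_at(x)
    print(f"x = {x:+.1f} mm -> {z}: R0.2 = {ZONES[z]['R02']} MPa")
```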

4. RESULTS

Figure 3 shows the shape of the drawpiece obtained during the numerical simulation of the stamping process of the TWB blank.

The numerical calculation results of plastic strain [-] and of thinning of the drawpiece wall as a result of the stamping process are shown in figures 4-6. In the case of the stamping process of the uniform Grade 2 blank, it can be observed that the plastic strain distribution in the blank material is uniform and circular (figure 4a), and is accompanied by uniform thinning of the drawpiece wall (figure 4b). In the case of the forming process of the uniform Grade 5 blank, a concentration of plastic strains is seen at the pole of the cup. In this area a considerable material thinning is seen (figures 5a and 5b).

The numerical simulation of TWB blank forming shows that the weld moves in the direction of the Grade 5 material as the punch penetrates into the deformed blank (figure 6).

Fig. 3. The drawpiece shape obtained in the numerical simulation of the stamping process of the welded blank: a) blank-holder force 1000 N, b) blank-holder force 3000 N.

Fig. 4. Numerical simulation results of stamping process of spherical cup made of Grade 2 blank at the punch penetration of 10 mm: a) plastic strain distribution [-], b) material thinning [mm].


As a result of the weld displacement, plastic strains increase in the more deformable material and decrease in the less deformable material (figure 6a). It should also be noted that in the area near the drawpiece top (on the border between the more deformable material and the heat affected zone) there is a local increase in strains and significant thinning of the drawpiece material (figures 6a and 6b). This might indicate a possibility of drawpiece weakening and a possible loss of material continuity in this area.

5. SUMMARY

The main goal of the study was to develop a numerical model of the stamping process of titanium TWB blanks. The performed simulations (FEM) of the process allow for analysis of the deformation introduced into the material during the forming process and for drawability assessment of the welded blanks. In the future these studies will be focused on a more accurate description of the material mechanical characteristics in the heat affected zones and the weld, which will allow for further improvement.

The calculations confirmed the experimental results showing that stamping of titanium welded blanks that are characterised by different strength properties, using rigid tools, is much more difficult than stamping of uniform blanks. Comparison of the strain distribution in the drawpiece made of a homogeneous material with that found in the drawpiece made of TWB blanks shows that the presence of a weld with different strength properties introduces irregularity into the strain scheme in the deformed blank. It can be observed that there is limited formability in the zone corresponding to the weld and that this zone moves in the direction of the less deformable material.

The simulation results show the efficiency of applying numerical calculations to studying stamping processes of TWB blanks. The results provide important information about the process and may be useful for the design and optimization of the process run (selection of appropriate process parameters such as blank-holder force, lubrication conditions, etc.).

Acknowledgements. Financial support of Structural Funds in the Operational Programme - Innovative Economy (IE OP) financed from the European Regional Development Fund - Project "Modern material technologies in aerospace industry", Nr POIG.01.01.02-00-015/08-00 is gratefully acknowledged.

Fig. 5. Numerical simulation results of the stamping process of a spherical cup made of the Grade 5 blank as the punch penetrates to a depth of 10 mm: a) distribution of plastic strains [-], b) material thinning [mm].

Fig. 6. Numerical simulation results of the stamping process of a spherical cap made of the welded blank Grade 2||Grade 5 for a punch penetration of 10 mm: a) distribution of plastic strain [-], b) material thinning [mm].


REFERENCES

Adamus J., 2010, The analysis of forming titanium products by cold metalforming, Monografie nr 174, Wyd. Politechniki Częstochowskiej (in Polish).

Adamus J., 2009 a, Stamping of the Titanium Sheets, Key Engineering Materials, http://www.scientific.net

Adamus J., 2009 b, Theoretical and experimental analysis of the sheet-titanium forming process, Archives of Metallurgy and Materials, 54/3.

Ananda D., Chena D.L., Bhole S.D., Andreychuk P., Boudreau G., 2006, Fatigue behavior of tailor (laser)-welded blanks for automotive applications, Materials Science and Engineering, A 420, 199-207.

Babu Veera K., Narayanan Ganesh R., Kumar Saravana G., 2010, An expert system for predicting the deep drawing behavior of tailor welded blanks, Expert Systems with Applications, 37.

Hyrcza-Michalska M., Grosman F., 2008, Formability of laser welded blanks, Proc. Conf. 17th International Scientific and Technical Conference "Design and technology of drawpieces and die stampings", Poznań: INOP (in Polish).

Hyrcza-Michalska M., Rojek J., Fruitos O., 2010, Numerical simulation of car body elements pressing applying tailor welded blanks - practical verification of results, Archives of Civil and Mechanical Engineering, 10/4.

Kinsey B., Liu Z., Cao J., 2004, A novel forming technology for tailor-welded blanks, Journal of Materials Processing Technology, 99.

Lisok J., Piela A., 2004, Model of welded joint in the metal charges used for testing pressformability, Archives of Civil and Mechanical Engineering, 4/3.

Lisok J., Piela A., 2003, Model złącza spawanego we wsadach do tłoczenia blach „tailored blanks", Przegląd Spawalnictwa, 6 (in Polish).

Meinders T., van den Berg A., Huetink J., 2000, Deep drawing simulations of Tailored Blanks and experimental verification, Journal of Materials Processing Technology, 103.

Qiu X.G., Chen W.L., 2007, The study on numerical simulation of the laser tailor welded blanks stamping, Journal of Materials Processing Technology, 187-188.

Rojek J., 2007, Modelling and simulation of complex problems of nonlinear mechanics using the finite and discrete element methods, Prace IPPT IFTR Reports, 4 (in Polish).

Schubert E., Klassen M., Zerner C., Walz C., Seplod G., 2001, Light-weight structures produced by laser beam joining for future applications in automobile and aerospace industry, Journal of Materials Processing Technology, 115.

Sinke J., Iacono C., Zadpoor A.A., 2010, Tailor made blanks for the aerospace industry, Int. J. Mater. Form., 3/1.

Więckowski W., Lacki P., Adamus J., 2011, Numerical simulation of the sheet-metal forming process of tailor-welded blanks (TWBs), Rudy Metale, 56/11 (in Polish).

Zimniak Z., Piela A., 2000, Finite element analysis of a tailored blanks stamping process, Journal of Materials Processing Technology, 106.

MODELLING OF THE STAMPING PROCESS OF TITANIUM TAILOR-WELDED BLANKS

Abstract
The article presents the results of numerical simulations of the stamping process of titanium tailor-welded blanks (TWB). Forming of a spherical cap from a welded blank and from homogeneous materials was analysed. Grade 2 and Grade 5 titanium sheets with a thickness of 0.8 mm were examined. The three-dimensional model of the stamping process and the numerical calculations were performed using the ADINA v. 8.6 program, based on the finite element method (FEM). The mechanical properties and geometrical parameters of the weld and its adjacent zones were assessed on the basis of experimental studies. The drawability and the possibility of plastic forming of the examined materials were evaluated through a comparative analysis of the determined plastic strain distributions in the drawpiece material and of the changes in drawpiece wall thickness. Preliminary experimental studies conducted in parallel confirmed the validity of the assumptions adopted in the presented numerical model of the stamping process. The results obtained from the simulations indicate the difficulties occurring during forming of welded blanks and provide important information about the process course, and may therefore be useful at the stage of design and optimization of stamping processes.

Received: October 16, 2012 Received in a revised form: November 22, 2012

Accepted: November 9, 2012


Computer Methods in Materials Science, Vol. 13, 2013, No. 2, 345–350, ISSN 1641-8581

FIRST PRINCIPLES PHASE DIAGRAM CALCULATIONS FOR THE CdSe-CdS WURTZITE, ZINCBLENDE AND ROCK SALT STRUCTURES

ANDRZEJ WOŹNIAKOWSKI1, JÓZEF DENISZCZYK1, OMAR ADJAOUD2,4, BENJAMIN P. BURTON3

1 Institute of Materials Science, University of Silesia, Bankowa 12, 40-007 Katowice, Poland
2 GFZ German Research Centre for Geosciences, Section 3.3, Telegrafenberg, 14473 Potsdam, Germany
3 Ceramics Division, Materials Science and Engineering Laboratory, National Institute of Standards and Technology, Gaithersburg, Maryland 20899-8520, USA
4 Present address: Technische Universität Darmstadt, Fachbereich Material- und Geowissenschaften, Fachgebiet Materialmodellierung, Petersenstr. 32, D-64287 Darmstadt, Germany
*Corresponding author: [email protected]

Abstract

The phase diagrams of CdSe1-xSx alloys were calculated for three different crystal structure types: wurtzite (B4); zinc-blende (B3); and rocksalt (B1). Ab initio calculations of supercell formation energies were fit to cluster expansion Hamiltonians, and Monte Carlo simulations were used to calculate finite temperature phase relations. The calculated phase diagrams have symmetric miscibility gaps for the B3 and B4 structure types and a slightly asymmetric diagram for the B1 structure. Excess vibrational contributions to the free energy were included, and with these, the calculated consolute temperatures are: 270 K for B4; 300 K for B3; and 270 K for B1. Calculated consolute temperatures for all structures are in good quantitative agreement with experimental data. Key words: clamping, groove rolling, FEM

1. INTRODUCTION

The cadmium chalcogenide CdSe1-xSx semiconducting alloy is characterized by a variable direct band gap which can be tuned by alloying, from 1.72 eV for CdSe to 2.44 eV for CdS. Because of excellent properties Cd(S,Se) is used in optoelectronic devices, photoconductors, gamma ray detectors, visible-light emitting diodes, lasers and solar cells (Xu et al., 2009; and references cited therein). CdSe1-xSx solid solutions have attracted great interest in recent years from both experimental and theoretical points of view (Xu et al., 2009; Mujica et al., 2003; Wei & Zhang, 2000; Banerjee et al., 2000; Tolbert & Alivisatos, 1995; Hotje et al., 2003; Deligoz et al., 2006).

It is known that CdS and CdSe occur at normal conditions both in the wurtzite and metastable zinc-blende structures (Mujica et al., 2003; Madelung et al., 1982). Depending on the growth conditions, CdSe (CdS) can be synthesized in the B4, or in the metastable B3-type structure, either by molecular-beam epitaxy or by controlling the growth temperature (Wei & Zhang, 2000). The equilibrium zinc-blende structure is observed in CdS nanostructures (Banerjee et al., 2000). Under high pressure, both the B3 and B4 structures convert to the denser


rocksalt-structure phase (Mujica et al., 2003; Tolbert & Alivisatos, 1995; Hotje et al., 2003).

Recent measurements of formation enthalpies (Hf) for CdSxSe1-x B4-type solid solutions, reported by Xu et al. (2009), indicated that within experimental error Hf = 0 at 298 K. This may indicate that, at least above room temperature, CdS and CdSe form an ideal solution in the B4 structure type, despite differences in molar volume (Vmol,CdSe = 33.727 cc/mol, Vmol,CdS = 29.934 cc/mol, Davies, 1981) and anion radii (RCdSe = 1.91 Å, RCdS = 1.84 Å, Jug & Tikhomirov, 2006) (Xu et al., 2009). These measurements did not show the presence of a miscibility gap above 298 K, indicating that either: 1) the blocking temperature for Se/S diffusion is above TC; or 2) the consolute temperature for CdSxSe1-x in the B4 structure must be below room temperature. The T-x phase diagram of the CdSe-CdS system was the subject of theoretical ab initio studies (Ouendadji et al., 2010; Breidi, 2011; Lukas et al., 2007). In both cases (B3 and B4), only formation energies (at x = 0, 0.25, 0.5, 0.75 and 1.0) were considered, while excess vibrational free energy contributions were neglected. In Ouendadji et al. (2010) only the B3 structure was investigated, while in Breidi (2011) phase diagrams for both the B3 and B4 structures were determined. Both studies predict miscibility gaps. For the B3-type structure the consolute temperatures (TC) reported by Ouendadji et al. (2010) and Breidi (2011) are TC = 315 K and 228 K, respectively. Both predicted consolute temperatures differ from the critical temperature (TC = 298 K) reported by Xu et al. (2009). The difference between TC as calculated by Ouendadji et al. (2010) and Breidi (2011) originates from the different ab initio computational setups and the different choices of supercells for which formation energies were calculated (this difference indicates that at least one of these calculations, and probably both, is based on a set of formation energies that is too small to yield converged effective Hamiltonians). For the B4-type structure Breidi (2011) reports TC = 225 K, lower than TC = 228 K for the B3 structure.

The aim of this study is to compare well converged calculations of CdSe-CdS phase diagrams in all three crystal structure types: B1, B3 and B4. Both configurational and excess vibrational contributions to the free energy are considered. Sufficiently large sets of formation energies are used, so that one can have reasonable confidence that the calculated phase diagrams faithfully reflect the density functional theory (DFT) energetics.

2. COMPUTATIONAL DETAILS

Calculations of formation energies, defined as

E_f = E_{CdS_x Se_{1-x}} - x\, E_{CdS} - (1 - x)\, E_{CdSe},

were performed using the Vienna ab initio Simulation Package VASP (Kresse & Hafner, 1993, 1994; Kresse & Furthmüller, 1996a, 1996b) implementing Blöchl's projector augmented wave approach (Blöchl, 1994), with the generalized gradient approximation for exchange and correlation potentials. Valence electron configurations for the pseudopotentials are: Cd = 4d10 5s2, Se = 4s2 4p4 and S = 3s2 3p4. All calculations were converged with respect to gamma-centered k-point sampling, and a plane-wave energy cutoff of 350 eV was used, which yields values converged to within a few meV per atom. Electronic degrees of freedom were optimized with a conjugate gradient algorithm. Both cell parameters and ionic positions were fully relaxed for each superstructure of the underlying B1, B3 and B4 crystal structures.

Based on the VASP results, the First Principles Phase Diagram calculations were performed with the use of the Alloy Theoretic Automated Toolkit (ATAT) software package (van de Walle & Ceder, 2002a; van de Walle et al., 2002; van de Walle & Asta, 2002). The VASP calculations were used to construct a cluster expansion (CE) Hamiltonian in the form of a polynomial in the occupation variables:

E(\sigma) = \sum_{\alpha} m_{\alpha} J_{\alpha} \left\langle \prod_{i \in \alpha'} \sigma_i \right\rangle   (Sanchez et al., 1984),

where α is a cluster defined as a set of lattice sites, mα denotes the number of clusters that are equivalent by symmetry, the summation is over all clusters α that are not equivalent by a symmetry operation, and the average is taken over all clusters α′ that are equivalent to α by symmetry. The Effective Cluster Interaction (ECI) coefficients, Jα, embody the information regarding the energetics of an alloy. In our investigations the well-converged cluster expansion required calculation of the formation energy for 30-50 ordered superstructures. The predictive power of the cluster expansion is controlled by the cross-validation score

CVS = \left[ \frac{1}{n} \sum_{i=1}^{n} \left( E_i - \hat{E}_{(i)} \right)^2 \right]^{1/2},

where Ei is an ab initio calculated formation energy of superstructure i, while \hat{E}_{(i)} represents the energy of superstructure i obtained from the CE with the use of the remaining (n - 1) structural energies. The free energy


contributed by lattice vibrations was introduced employing the coarse-graining formalism (van de Walle & Ceder, 2002b). For each superstructure the vibrational free energy was calculated within the quasi-harmonic approximation with the application of the bond-length-dependent transferable force constant approximation (van de Walle & Ceder, 2002b).
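For illustration only, the following minimal sketch (not part of the original paper; the correlation data and the plain least-squares fit are hypothetical stand-ins for the ATAT machinery) shows how a set of ECI can be fitted to supercell formation energies and how the leave-one-out cross-validation score defined above is evaluated.

```python
import numpy as np

# Hypothetical input: rows = superstructures, columns = m_alpha * <correlation>
# for each cluster alpha; e_form = ab initio formation energies (eV/atom).
corr = np.array([[1.0, 0.2, -0.1],
                 [1.0, -0.4, 0.3],
                 [1.0, 0.1, 0.5],
                 [1.0, 0.6, -0.2],
                 [1.0, -0.3, -0.4]])
e_form = np.array([0.012, 0.018, 0.010, 0.020, 0.015])

def fit_eci(x, y):
    """Least-squares fit of the effective cluster interactions J_alpha."""
    eci, *_ = np.linalg.lstsq(x, y, rcond=None)
    return eci

def cvs(x, y):
    """Leave-one-out cross-validation score, CVS = sqrt(mean (E_i - E_hat_(i))^2)."""
    residuals = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # drop structure i from the fit
        eci = fit_eci(x[mask], y[mask])
        residuals.append(y[i] - x[i] @ eci)    # predict the left-out structure
    return np.sqrt(np.mean(np.square(residuals)))

eci = fit_eci(corr, e_form)
print("ECI (eV/atom):", eci)
print("CVS (eV/atom):", cvs(corr, e_form))
```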

The phase diagram calculations were performed with the use of the Monte Carlo thermodynamic integration within the semi-grand-canonical ensemble (van de Walle & Asta, 2002). In this ensemble, for a temperature (T) and chemical potentials (μ) imposed externally, the internal energy (Ei) and concentration (xi) of the constituents of an alloy with a fixed number of atoms (N) are allowed to fluctuate. The thermodynamic potential (per atom) associated with the semi-grand-canonical ensemble can be defined in terms of the partition function of the system in the form presented in equation (1) (van de Walle & Asta, 2002):

\phi(\beta, \mu) = -\frac{1}{\beta N} \ln \sum_i \exp\left[-\beta \left(E_i - \mu\, x_i N\right)\right],   (1)

where the summation is over different atomic configurations (alloy states) and β = 1/(kB T) (kB is Boltzmann's constant). In the differential form (with variable T and μ) equation (1) can be rewritten in the form given by equation (2):

d(\beta\phi) = (E - \mu x)\, d\beta - \beta x\, d\mu .   (2)

where E and x are the alloy's average internal energy (calculated with the use of the CE expansion) and the average concentration of constituents; the averaging was performed according to the formula:

\langle A \rangle = \sum_i A_i \exp\left[-\beta (E_i - \mu x_i N)\right] \Big/ \sum_i \exp\left[-\beta (E_i - \mu x_i N)\right] .

Using the differential form given by equation (2), the thermodynamic potential φ(β, μ) can be calculated through the thermodynamic integration described by equation (3) (van de Walle & Asta, 2002):

\beta_1 \phi(\beta_1, \mu_1) = \beta_0 \phi(\beta_0, \mu_0) + \int_{(\beta_0, \mu_0)}^{(\beta_1, \mu_1)} \left[(E - \mu x)\, d\beta - \beta x\, d\mu\right]   (3)

The thermodynamic integration in (3), along a continuous path connecting the points (β0, μ0) and (β1, μ1) which does not encounter a phase transition, was performed using the Monte Carlo method. The starting point (β0, μ0) is taken in the limit of low temperature at a chemical potential stabilizing a given ground state of the system (here, the chemical potentials of the end-members CdS and CdSe).
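As a purely illustrative companion to equation (3) (this is not the Monte Carlo code used in the study; the chemical potential, the sampled averages and the ground-state energy below are hypothetical), the following sketch accumulates βφ along a constant-μ path by trapezoidal integration of d(βφ) = (E − μx) dβ − βx dμ:

```python
import numpy as np

k_B = 8.617e-5          # eV/K
mu = 0.05               # fixed chemical potential along the path (hypothetical, eV)

# Hypothetical semi-grand-canonical MC averages along a temperature scan.
T = np.array([100.0, 150.0, 200.0, 250.0, 300.0])      # K
E_avg = np.array([-1.95, -1.93, -1.90, -1.86, -1.82])  # <E> per atom, eV
x_avg = np.array([0.02, 0.05, 0.10, 0.18, 0.27])       # <x>

beta = 1.0 / (k_B * T)

# Starting value beta0*phi0 from the low-temperature ground state limit,
# where phi -> E_gs - mu*x_gs (hypothetical end-member values).
E_gs, x_gs = -1.95, 0.0
beta_phi = beta[0] * (E_gs - mu * x_gs)

integrand = E_avg - mu * x_avg        # d(beta*phi)/d(beta) at constant mu
for i in range(1, len(beta)):
    d_beta = beta[i] - beta[i - 1]    # beta decreases as T increases
    beta_phi += 0.5 * (integrand[i] + integrand[i - 1]) * d_beta
    print(f"T = {T[i]:5.0f} K  phi = {beta_phi / beta[i]: .4f} eV/atom")
```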

A schematic diagram of the approach is: VASP calculations of formation enthalpies and vibrational free energies for a set of superstructures → fit a cluster expansion (CE = set of effective cluster interactions, ECI) → fit effective force constants to model excess vibrational contributions → Monte Carlo thermodynamic integration → phase diagram. The advantage of this approach is that it is based on parameter-free ab initio calculations and leads to high quality effective Hamiltonians for multicomponent systems. The CE has the limitation that it only applies to a parent structure and its superstructures.

3. RESULTS AND DISCUSSION

Using the ab initio (VASP) method, calculations of the ground state energy were performed for the stoichiometric compounds CdSe and CdS and of the formation energies of many B1-, B3- or B4-based superstructures (36 for B1, 36 for B3 and 34 for B4). All formation energies were positive, which implies that no intermediate ground state structures were predicted. The optimal number of superstructures was determined by minimizing the cross-validation score between the ab initio computations and the cluster expansion prediction. Figure 1 shows the dependence of the CVS on the number of calculated superstructures. Convergence of the CVS at values below 1.5 meV/atom was reached for approximately 25 superstructures. Increasing the number of superstructures further results in fluctuations of the CVS with a standard deviation of the order of 0.1 meV/atom. The results presented in figure 1 strongly suggest convergence of the CE series.

Fig. 1. Dependence of the CVS on the number of superstructures used in the fit of the CE for the B1, B3 and B4 structures.


The ECI are plotted as functions of interatomic separation in figure 2. It is evident that with increasing distance the pair-ECI magnitudes decrease with oscillating sign. The 3-body ECI for the B1 structure type is very small. The low values of the cross-validation score and the decreasing magnitudes of the ECI justify truncating the CE series and discarding the larger clusters.

Fig. 2. Effective cluster interactions (ECI) as functions of interatomic distance (d/dnn) for the clusters taken into account in the cluster expansion series for the B4 (a), B3 (b) and B1 (c) underlying crystal structures. The interatomic distance is expressed in units of the nearest neighbor distance (dnn).

Figure 3 shows plots of the VASP-calculated supercell formation energies (Ef per cation) that were used to fit the ECI in figure 2. The differences between the VASP-calculated and CE-calculated energies are small, except for the end-member compounds CdSe and CdS in the B4 structure type; these differences are an order of magnitude smaller than the values of Ef, so the results in figure 3 confirm the quality and predictive power of the CE.

Fig. 3. Formation energies Ef calculated by VASP (crosses) and fitted by the cluster expansion (CE) for the B4 (a), B3 (b) and B1 (c) structures. Note the different scale used on the vertical axis of (c).

Note that formation energies for B1-based supercells are about twice as large as for the wurtzite and zinc-blende structures. This correlates with the higher predicted consolute temperature when only the configurational part of free energy is taken into account.

Figure 4 shows the calculated phase diagrams for the CdSe-CdS system in the B1, B3 and B4 crystal structures. Temperature-independent ECI were used to calculate the lower solvii in (a) and (b) and the upper curve in (c) (dashed lines). Temperature-dependent ECI, which imply the inclusion of excess vibrational contributions to the free energy, yield the results plotted as the upper solvii in (a) and (b) and the lower solvus in (c) (solid lines). In most miscibility gap systems (Adjaoud et al., 2009; Burton & van de Walle, 2006) the inclusion of temperature-dependent ECI leads to a reduction in TC, thus the results in figures 4a and 4b are atypical. However, detailed model studies of the effect of lattice


Fig. 4. Calculated phase diagram of the CdSxSe1-x alloy in B4 (a), B3 (b) and B1 (c). Dashed and solid solvii are the phase diagrams calculated with temperature-independent and temperature-dependent ECI, respectively. The result in (a) and (b), that T-independent ECI predict a lower TC rather than a higher one, is atypical.

vibrations on the phase stabilities of substitutional alloys have shown (Garbulsky & Ceder, 1996) that the inclusion of vibrations in phase-diagram modeling of miscibility gap systems can increase the consolute temperature. Furthermore, investigations of the vibrational entropy change upon the order-disorder transition in the Pd3V system (van de Walle & Ceder, 2000) have shown that the relaxation of bonds can change the sign of vibrational entropy differences as compared to expectations based on the bond proportion model. The main feature of the calculated phase diagrams is the consolute temperature (TC). For the B4 structure (figure 4a) the miscibility gap is symmetric with critical point (xC, TC) = (0.50, 220 K) when only configurational degrees of freedom are taken into account, and (xC, TC) = (0.50, 270 K) when the temperature dependent vibrational contribution is included. For the B3 structure (figure 4b) the shape of the phase diagram does not change significantly as compared to that of the B4 structure, but for the B3 structure we obtained higher consolute temperatures: 230 K and 300 K, respectively. For the B1 structure (figure 4c) the phase diagram obtained on the basis of temperature independent ECI is almost symmetric (xC = 0.51). Inclusion of the vibrational contribution enhances the asymmetry (xC = 0.61) and reduces the critical temperature: from TC = 360 K, when only the formation energy is taken into account, to TC = 270 K with vibrational effects included.

4. CONCLUSIONS

Ab initio CdSe-CdS phase diagrams for the wurtzite, zinc-blende and rock salt structure types were calculated by the CE method, both without and with excess vibrational free energy contributions (i.e. without and with T-dependent ECI, respectively). Miscibility gaps are predicted for all three systems. When only the configurational free energy is taken into account the calculated consolute temperatures are 220 K, 230 K and 360 K for the wurtzite, zinc-blende and rock salt structure types, respectively. Surprisingly, the inclusion of excess vibrational contributions to the free energy destabilizes the B3- and B4-based solid solutions, contrary to similar studies (Burton et al., 2006), and increases the consolute temperature by 23% and 30% for the wurtzite and zinc-blende structure types, respectively. For the rock salt structure the inclusion of vibrations reduces the consolute temperature by 25%, similarly as reported by Adjaoud et al. (2009) for the TiC-ZrC system. Slightly above room temperature a complete solid solution is possible in the zinc-blende structure. The calculated consolute temperatures for the B1, B3 and B4 structure types compare well with the experimental critical temperature TC = 298 K reported by Xu et al. (2009).

REFERENCES

Adjaoud, O., Steinle-Neumann, G., Burton, B.P., van de Walle, A., 2009, First-principles phase diagram calculations for the HfC-TiC, ZrC-TiC, and HfC-ZrC solid solutions, Phys. Rev. B, 80, 134112-134119.

Banerjee, R., Jayakrishnan, R., Ayyub, P., 2000, Effect of the size-induced structural transformation on the band gap in CdS nanoparticles, J. Phys. Condens. Matter, 12, 10647-10654.

Blöchl, P. E., 1994, Projector augmented-wave method, Phys. Rev. B, 50, 17953-17979.

Burton, B.P., van de Walle, A., 2006, First principles phase diagram calculations for the system NaCl-KCl: The role of excess vibrational entropy, Chem. Geo., 225, 222-229.

Page 156: CMMS_nr_2_2013

CO

MPU

TER M

ETH

OD

S IN

MA

TERIA

LS S

CIE

NC

E

INFORMATYKA W TECHNOLOGII MATERIAŁÓW

– 350 –

Burton, B.P., van de Walle, A., Kattner, U., 2006, First principles phase diagram calculations for the wurtzite-structure systems AlN-GaN, GaN-InN, and AlN-InN, Journ. Appl. Phys., 100, 113528-113534.

Breidi, A., 2011, Temperature-pressure phase diagrams, structural and electronic properties of binary and pseudobinary semiconductors: an ab initio study, PhD thesis, l'Université Paul Verlaine, Metz.

Davies, P.K., 1981, Thermodynamics of solid solution formation, PhD thesis, Arizona State University, Arizona.

Deligoz, E., Colakoglu, K., Ciftci, Y., 2006, Elastic, electronic, and lattice dynamical properties of CdS, CdSe, and CdTe, Physica B, 373, 124-130.

Garbulsky, G.D., Ceder, G., 1996, Contribution of the vibrational free energy to phase stability in substitutional alloys: Methods and trends, Phys. Rev. B, 53, 8993-9001.

Hotje, U., Rose, C., Binnewies, M., 2003, Lattice constants and molar volume in the system ZnS, ZnSe, CdS, CdSe, Sol. State Sci., 5, 1259-1262.

Jug, K., Tikhomirov, V.A., 2006, Anion substitution in zinc chalcogenides, J. Comput. Chem. 27, 1088-1092.

Kresse, G., Hafner, J., 1993, Ab initio molecular dynamics for liquid metals, Phys. Rev. B, 47, 558-561.

Kresse, G., Hafner, J., 1994, Ab initio molecular simulation of the liquid-metal–amorphous-semiconductor transition in germanium, Phys. Rev. B, 49, 14251-14269.

Kresse, G., Furthmüller, J., 1996a, Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set, Comput. Mater. Sci., 6, 15-50.

Kresse, G., Furthmüller, J., 1996b, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B, 54, 11169-11186.

Lukas, H. L., Fries, S. G., Sundman, B., 2007, Computational Thermodynamics. The Calphad Method, Cambridge Press, Cambridge.

Madelung, O., Schultz, M., Weiss, H., 1982, Landolt-Bornstein Numerical Data and Functional Relationships in Science and Technology, group III, 17b, Springer-Verlag, Berlin.

Mujica, A., Rubio, A., Munoz, A., Needs, R.J., 2003, High-pressure phases of group-IV, III-V and II-VI compounds, Rev. Mod. Phys., 75, 863-912.

Ouendadji, S., Ghemid, S., Meradji, H., El Haj Hassan, F., 2010, Density functional study of CdS1-xSex and CdS1-xTex alloys, Comput. Mater. Sci., 48, 206-211.

Sanchez, J.M., Ducastelle, F., Gratias, D., 1984, Generalized cluster description of multicomponent systems, Physica A, 128, 334-350.

Tolbert, S.H., Alivisatos, A.P., 1995, The wurtzite to rock salt structural transformation in CdSe nanocrystals under high pressure, J. Chem. Phys., 102, 4642-4656.

van de Walle, A., Ceder, G., 2000, First-principles computation of the vibrational entropy of ordered and disordered Pd3V, Phys. Rev. B, 61, 5972-5978.

van de Walle, A., Ceder, G., 2002a, Automating first-principles phase diagram calculations, J. Phase Equilib., 23, 348-359.

van de Walle, A., Asta, M., Ceder, G., 2002, The alloy theoretic automated toolkit: A user guide, Calphad, 26, 539-553.

van de Walle, A., Asta, M., 2002, Self-driven lattice-model Monte Carlo simulations of alloy thermodynamics, Modelling Simul. Mater. Sci. Eng., 10, 521-538.

van de Walle, A., Ceder, G., 2002b, The effect of lattice vibra-tions on substitutional alloy thermodynamics, Rev. Mod. Phys., 74, 11-45.

Wei, S.-H., Zhang, S.B., 2000, Structure stability and carrier localization in CdX (X = S, Se, Te) semiconductors, Phys. Rev. B, 62, 6944-6947.

Xu, F., Ma, X., Kauzlarich, S.M., Navrotsky, A., 2009, Enthalpies of formation of CdSxSe1-x solid solutions, J. Mater. Res., 24, 1368-1374.

FIRST PRINCIPLES PHASE DIAGRAM CALCULATIONS FOR THE PSEUDOBINARY CdSe-CdS SYSTEM CRYSTALLIZING IN THE WURTZITE, ZINC-BLENDE AND ROCK SALT LATTICES

Abstract

The semiconducting Cd(Se,S) compounds and their alloys are characterized by a wide direct band gap and can therefore be useful in optoelectronic and photosensitive devices, gamma radiation detectors, light emitting diodes, lasers and solar cells. Owing to these attractive application possibilities, semiconducting CdSe1-xSx alloys have in recent years been the subject of theoretical considerations and intensive experimental studies.

Under normal conditions the Cd(Se,S) compounds crystallize in the hexagonal wurtzite structure (B4) and in the metastable, face-centred zinc-blende structure (B3). Under high pressure the B4 and B3 structures change their crystalline form and transform into the denser, face-centred rock salt structure (B1).

The aim of this work is to calculate the phase diagrams and to determine the critical miscibility temperatures for CdSe1-xSx alloys crystallizing in the B4, B3 and B1 structures.

The phase diagrams were determined on the basis of thermodynamic potentials calculated by the Monte Carlo thermodynamic integration method. The calculations indicate the presence of miscibility gaps over the entire CdSe1-xSx concentration range for all the crystal lattices considered. The results obtained for the B4 and B3 lattices are characterized by symmetric miscibility gaps, while for the B1 lattice the miscibility gap is slightly asymmetric. The determined critical miscibility temperatures are 270 K, 300 K and 270 K for the B4, B3 and B1 lattices, respectively. The lattice vibration effect was taken into account in the calculations, and the obtained results show good agreement with the experimental data available in the literature.

Received: October 29, 2012 Received in a revised form: December 3, 2012

Accepted: December 8, 2012


PHASE DIAGRAM CALCULATIONS FOR THE ZnSe-BeSe SYSTEM BY FIRST-PRINCIPLES BASED THERMODYNAMIC

MONTE CARLO INTEGRATION

ANDRZEJ WOŹNIAKOWSKI, JÓZEF DENISZCZYK*

Institute of Materials Science, University of Silesia, Bankowa 12, 40-007 Katowice, Poland *Corresponding author: [email protected]

Abstract

The T-x phase diagram of the Zn1-xBexSe alloy is calculated by means of an ab initio method supplemented with a lattice Ising-like cluster expansion approach and Monte Carlo thermodynamic computations. The presented results confirm the high quality of the mapping of the disordered alloy onto the lattice Hamiltonian. The calculated phase diagram shows an asymmetric miscibility gap with an upper critical solution temperature equal to 860 K (1020 K) when the lattice vibrations are included (excluded) in the free energy of the system. We show that below room temperature the miscibility of the ZnSe and BeSe phases is possible only in narrow concentration ranges near x = 0 and x = 1. At elevated temperatures the two phases mix over a wider concentration range on the Zn-rich side of the phase diagram. Key words: clamping, groove rolling, FEM

1. INTRODUCTION

The alloying of semiconductors with the aim of tuning the band gap energy to the values required for device applications has in the recent past been the subject of extensive experimental and theoretical research (Berghout et al., 2007; and references cited therein). The investigations have focused especially on the wide band gap II-VI semiconductors. The II-VI compound ZnSe is one of the promising materials for light emitting devices operating at short wavelengths (green and blue range). However, dislocations, point defects and their diffusion cause the degradation observed in devices based on ZnSe structures. Since the BeSe compound is characterized by strong covalent bonding (Vèrié, 1997), alloying ZnSe with BeSe was proposed to increase the resistance of the ZnSe structure to defect generation (Plazaola et al., 2003).

Recent measurements have confirmed that the concentration of vacancies in the Zn1-xBexSe alloy decreases with increasing concentration of Be atoms (Plazaola et al., 2003). However, a Raman scattering study of the lattice dynamics in the Zn1-xBexSe alloy has shown that the atomic distribution in the alloy is not uniform: ZnSe-rich and BeSe-rich regions form (Pagès et al., 2010). These findings indicate that phase separation occurs in the investigated samples, which may point to the presence of an immiscibility region in the (T-x) phase diagram of the alloy. Due to the mismatch of unit cell volumes and the different elastic properties of the ZnSe and BeSe constituents, the preparation of the Zn1-xBexSe alloy may demand special conditions. Knowledge of the T-x phase diagram might be very helpful in the preparation of the Zn1-xBexSe alloy. The phase diagram of the Zn1-xBexSe alloy was the subject of a theoretical ab initio study with the use of the common tangent


method of phase diagram construction (Berghout et al., 2007). In that approach only the formation enthalpy was taken into account, neglecting the lattice vibration contribution to the free energy of the system. Furthermore, in the phase diagram calculations the formation enthalpy was considered for only a few concentrations.

The present work aims to extend the study of the phase stability of the Zn1-xBexSe system to cover both the configurational formation enthalpy and the lattice vibration free energy contributions to the total free energy of the system. With this aim, theoretical research was undertaken to determine the T-x phase diagram of the Zn1-xBexSe alloy by an ab initio method supplemented with Monte Carlo (MC) phase diagram calculations within the semi-grand-canonical ensemble.

In the subsequent text the computational details are presented in Sec. 2. The results are presented and discussed in Sec. 3. Section 4 is the conclusion.

2. COMPUTATIONAL DETAILS

Under normal conditions both end-member compounds, ZnSe and BeSe, crystallize in the zinc-blende (B3) type structure (Karzel et al., 1996; Luo et al., 1995). With this in mind, in our phase diagram calculations we assume the B3-type structure for the Zn1-xBexSe alloy over the entire concentration range. Formation energies, defined as

\Delta E = E_{Zn_{1-x}Be_xSe} - (1 - x)\, E_{ZnSe} - x\, E_{BeSe},

were calculated for supercells containing up to 20 atoms per unit cell. Total energy calculations were done using the Vienna ab initio Simulation Package VASP (Kresse & Hafner, 1993, 1994; Kresse & Furthmüller, 1996a, 1996b) with ultrasoft Vanderbilt-type pseudopotentials (Vanderbilt, 1990) and the generalized gradient approximation (GGA) for exchange and correlation. Valence electron configurations for the pseudopotentials are Se = 4s2 4p4, Be = 2s2, Zn = 3d10 4s2. All calculations were converged with respect to gamma-centered k-point sampling, and a plane-wave energy cutoff of 350 eV was used, which yields values converged to within a few meV per atom. Electronic degrees of freedom were optimized with a conjugate gradient algorithm, which is recommended for difficult relaxation problems. Each superstructure was relaxed with respect to volume, supercell shape and atomic positions.
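A trivial numerical illustration of the formation energy definition above; the total energies used here are hypothetical placeholders, not VASP results:

```python
def formation_energy(e_alloy, e_znse, e_bese, x):
    """Delta E = E(Zn_{1-x}Be_xSe) - (1-x)*E(ZnSe) - x*E(BeSe), per formula unit."""
    return e_alloy - (1.0 - x) * e_znse - x * e_bese

# Hypothetical total energies (eV per formula unit) for a Zn0.5Be0.5Se supercell.
print(formation_energy(e_alloy=-7.10, e_znse=-7.20, e_bese=-7.05, x=0.5))  # -> 0.025 eV
```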

Based on the VASP results, all remaining steps of the First Principles Phase Diagram (FPPD) calculations were performed with the use of the Alloy Theoretic Automated Toolkit (ATAT) software package (van de Walle & Ceder, 2002a; van de Walle et al., 2002; van de Walle & Asta, 2002). In the first step the VASP calculations were used to construct a cluster expansion (CE) Hamiltonian. The cluster expansion (Sanchez et al., 1984) is a method to parameterize the energy of a material as a function of its configuration. The energy (per atom) is represented as a polynomial in the occupation variables given by equation (1):

E(\sigma) = \sum_{\alpha} m_{\alpha} J_{\alpha} \left\langle \prod_{i \in \alpha'} \sigma_i \right\rangle ,   (1)

where α is a cluster defined as a set of lattice sites. The sum is taken over all clusters α that are not equivalent by a symmetry operation of the space group of the parent lattice, while the average is taken over all clusters α′ that are equivalent to α by symmetry. The coefficients Jα of the CE expansion (1) embody the information regarding the energetics of the alloy and are called the Effective Cluster Interactions (ECI). The multiplicities mα indicate the number of clusters that are equivalent by symmetry to α, divided by the number of lattice sites. A typical well-converged cluster expansion contains about 10-20 effective cluster interactions and requires the calculation of the energy of around 30-50 ordered structures (van de Walle et al., 2002; van der Ven et al., 1998; Garbulsky & Ceder, 1995; Ozolinš et al., 1998).

The predictive power of the cluster expansion defined by equation (1) is controlled by the cross-validation score defined by equation (2):

CV = \left[ \frac{1}{n} \sum_{i=1}^{n} \left( E_i - \hat{E}_{(i)} \right)^2 \right]^{1/2} ,   (2)

where Ei is an ab initio calculated energy of superstructure i, while \hat{E}_{(i)} represents the energy of superstructure i obtained from the CE (equation (1)) using the (n - 1) other structural energies.

The part of the free energy contributed by lattice vibrations was taken into account employing the coarse-graining formalism. For each superstructure the vibrational free energy was calculated within the quasi-harmonic approximation. To reduce the computational time needed to obtain the phonon densities of states for the set of superstructures involved in the cluster expansion procedure, the bond-length-dependent transferable force constant approximation was used. Within this approximation the nearest-neighbor


force constant matrix as a function of bond length (volume) was calculated for the end-members ZnSe and BeSe of the Zn1-xBexSe series. For all remaining superstructures the force constant matrices were predicted using the relaxed bond lengths and the chemical identities of the bonds in each superstructure, employing the bond stiffness versus bond length functions evaluated for the end-member compounds. In this procedure, called the transferable force constant approach (van de Walle & Ceder, 2002b), the force constant matrix is approximated by the diagonal form described by equation (3) (Liu et al., 2007):

\Phi_{i,j} = \mathrm{diag}(s,\, b,\, b) ,   (3)

with only two independent terms: stretching stiffness s, and the isotropic bending stiffness b.
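The transferable force constant idea can be sketched as below; the linear stiffness-versus-bond-length coefficients and the bond labels are made-up illustrative values (cf. figure 2), not the fitted constants of this work:

```python
import numpy as np

# Hypothetical linear fits of bond stiffness vs bond length, stiffness = a + c*length,
# one pair (a, c) per bond type and per stiffness component (s: stretching, b: bending).
FITS = {
    "Se-Zn": {"s": (18.0, -5.0), "b": (4.0, -1.2)},   # made-up coefficients
    "Se-Be": {"s": (25.0, -7.5), "b": (6.0, -1.8)},
}

def stiffness(bond_type, component, length):
    a, c = FITS[bond_type][component]
    return a + c * length

def force_constant_matrix(bond_type, length):
    """Diagonal force constant matrix diag(s, b, b) in the local bond frame, equation (3)."""
    s = stiffness(bond_type, "s", length)
    b = stiffness(bond_type, "b", length)
    return np.diag([s, b, b])

print(force_constant_matrix("Se-Zn", 2.46))
```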

An alternative way to determine the vibrational free energy for each superstructure considered in the cluster expansion (1) is to apply the direct force method (Parlinski et al., 1997) or linear response theory (Giannozzi et al., 1991) individually for each superstructure. Because of the multiatomic composition and low crystal symmetry of the superstructures, both alternative approaches demand high computing power, are time consuming, and are not practicable in the phase diagram calculations.

The phase diagram calculations were performed with the use of the Monte Carlo (MC) thermodynamic integration within the semi-grand-canonical ensemble (van de Walle & Asta, 2002). The Hamiltonian used in the MC integration was of the cluster expansion form given by equation (1), with expansion parameters fitted to the formation enthalpy and vibrational free energy calculated by parallel computing.

3. RESULTS AND DISCUSSION

The ab initio calculations were performed for the end-member compounds ZnSe and BeSe and for 33 reference superstructures containing up to 20 atoms. All supercell energies are positive with respect to the end-member compounds and no intermediate ground states were found, which indicates a miscibility gap system. In the fitting procedure described in Section 2 the best cross-validation score (CV = 0.0051) was obtained for 14 clusters in the expansion (1). The cluster coordinates and corresponding effective cluster interactions are collected in table 1. It is evident that the largest ECI are those of the zero and point clusters. Among the multisite clusters the largest ECI belongs to the nearest neighbor cation sites. With increasing distance between the sites of a cluster, the values of the pair ECI fall off in an oscillatory manner.

Figure 1 shows the formation energies ΔE (per cation) calculated for all reference superstructures by the ab initio (VASP) method and using the cluster expansion (1) with the ECI given in table 1. The calculated and fitted energies do not coincide precisely, but apart from the end-member compounds the residuals are at least one order of magnitude smaller than the values of ΔE itself. This result confirms the quality of the computational methodology and the predictive power of the CE expansion.

The bending and stretching bond stiffness versus bond length relationships for the Se-Zn and Se-Be nearest neighbor bonds are presented in figure 2. The bond stiffness constants (b and s), calculated from the force constants determined ab initio within the small displacement approach for

Table 1. Cluster coordinates and corresponding effective cluster interactions of the clusters taken into account in the cluster expansion series defined by equation (1).

Index   Cluster coordinates                        di,j(a), Å   ECI, eV/cation
1       Zero cluster                               -            0.041703
2       Point cluster                              -            0.012845
3       (1, 1, 1), (1/2, 1, 3/2)                   2.835        0.006974
4       (1, 1, 1), (0, 1, 1)                       4.009        -0.003786
5       (1, 1, 1), (0, 1/2, 3/2)                   4.910        -0.002446
6       (1, 1, 1), (0, 0, 1)                       5.670        0.003586
7       (1, 1, 1), (-1/2, 1/2, 1)                  6.339        -0.001999
8       (1, 1, 1), (0, 0, 2)                       6.944        -0.001199
9       (1, 1, 1), (-1/2, 0, 3/2)                  7.501        -0.000661
10      (1, 1, 1), (-1, 1, 1)                      8.019        -0.001201
11      (1, 1, 1), (-1, 1/2, 3/2)                  8.505        -0.001169
12      (1, 1, 1), (-1/2, -1/2, 1)                 8.505        0.000852
13      (1, 1, 1), (-1, 0, 1)                      8.965        -0.000586
14      (1, 1, 1), (1/2, 1, 3/2), (1/2, 1/2, 1)    2.835        -0.002262

(a) di,j is the distance of the longest pair within the cluster.


a sequence of volumes of the end-member compounds, show a linear dependence on bond length.

Fig. 1. Formation energies ΔE calculated by VASP (crosses) and fitted by the cluster expansion (CE).

Fig. 3. Calculated phase diagram based on the ECI fitted to the configurational formation enthalpy (dashed line) and on the temperature-dependent ECI fitted to the sum of the configurational and vibrational free energy (solid line). The arrows show the upper critical solution points.

Figure 3 shows the phase diagram of the Zn1-xBexSe alloy calculated on the basis of temperature-independent ECI (fitted to formation enthalpies) and temperature-dependent ECI (fitted to the sum of formation enthalpies and vibrational free energies).

The main feature of the calculated phase diagram is the presence of an asymmetric miscibility gap with an upper critical solution temperature (TC). With only the configurational degrees of freedom taken into account, our modelling yields the critical point (xC, TC) = (0.69, 1020 K). The shape of our phase diagram differs noticeably from that calculated by Berghout et al. (2007), where no asymmetry can be observed. The critical temperature obtained within the approach restricted to formation enthalpies (TC = 1020 K) is significantly lower than the one obtained by Berghout et al. (2007) (TC = 1324 K). Also the critical concentration (xC = 0.54) found by Berghout et al. (2007) is much lower than our result. The higher critical concentration resulting from our calculations can be attributed to the asymmetry of the phase diagram (figure 3).

The contribution of the vibrational degrees of freedom to the free energy, when taken into account, modifies the phase diagram considerably (figure 3). The main effect of lattice vibrations is a shift of the critical point to lower temperature (TC = 860 K). The critical concentration remains almost unchanged (xC = 0.65). The effect of vibrations on the shape of the phase diagram is negligible, although the recess visible in the phase diagram resulting from formation enthalpies alone is slightly reduced.

The asymmetric shape of the calculated phase diagram might suggest the presence, on the Zn-rich left-hand side of the concentration range, of some additional structure with a ground-state configurational energy lower than that of the end-member compounds. However, our detailed search for structures with negative ground-state formation enthalpy failed.

The asymmetry of the phase diagram, indicating the enhanced miscibility of cation atoms on the Zn-rich side of the phase diagram, can be related to the mismatch of the ionic radii of the cations (Zn – 74 pm and Be – 41 pm). It is well known that the miscibility gap between end-member compounds whose ions have very different ionic radii is asymmetric, with reduced solubility on the side of the diagram corresponding to the smaller ion (Burton et al., 2006).

Fig. 2. Nearest neighbor stretching s (panels a, c) and bending b (panels b, d) force constants versus bond length for Se-Zn (left panel) and Be-Se (right panel) bonds. Crosses indicate the ab initio data points and lines represent the linear fits used in the calculations of the vibrational free energy.


4. CONCLUSIONS

Ab initio based Monte Carlo modelling of the T-x phase diagram of the Zn1-xBexSe alloy has shown that under normal conditions the mutual solubility of the Zn and Be cations is possible only on the Zn-rich side of the phase diagram. At elevated temperatures the two phases, ZnSe and BeSe, mix over a wider concentration range on the Zn-rich side of the phase diagram. An upper critical solution temperature of 860 K is predicted at a Be concentration of x = 0.65, above which the solubility of the cations is possible over the whole concentration range.

REFERENCES

Berghout, A., Zaoui, A., Hugel, J., Ferhat, M., 2007, First-principles study of the energy-gap composition depend-ence of Zn1-xBexSe ternary alloys, Phys. Rev. B, 75, 205112-205121.

Burton, B.P., van de Walle, A., Kattner, U., 2006, First princi-ples phase diagram calculations for the wurtzite-structure systems AlN-GaN, GaN-InN, and AlN-InN, Journ. Appl. Phys., 100, 113528-113534.

Garbulsky, G.D., Ceder, G., 1995, Linear-programming method for obtaining effective cluster interactions in alloys from total-energy calculations: Application to the fcc Pd-V system, Phys. Rev. B, 51, 67-72.

Giannozzi, P., de Gironcoli, S., Pavone, P., Baroni, S., 1991, Ab initio calculation of phonon dispersions in semiconduc-tors, Phys. Rev. B, 43, 7231-7242.

Karzel, H., Potzel, W., Köfferlein, M., Schiessl, W., Steiner, M., Hiller, U., Kalvius, G.M., Mitchell, D.W., Das, T.P., Blaha, P., Schwarz, K., Pasternak, M.P., 1996, Lattice dynamics and hyperfine interactions in ZnO and ZnSe at high external pressures, Phys. Rev. B, 53, 11425-11438.

Kresse, G., Hafner, J., 1993, Ab initio molecular dynamics for liquid metals, Phys. Rev., B 47, 558-561.

Kresse, G., Hafner, J., 1994, Ab initio molecular simulation of the liquid-metal–amorphous-semiconductor transition in germanium, Phys. Rev. B, 49, 14251-14269.

Kresse, G., Furthmüller, J., 1996a, Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set, Comput. Mater. Sci., 6, 15-50.

Kresse, G., Furthmüller, J., 1996b, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B, 54, 11169-11186.

Liu, J.Z., Ghosh, G., van de Walle, A., Asta, M., 2007, Transferable force-constant modeling of vibrational thermodynamic properties in fcc-based Al-TM (TM = Ti, Zr, Hf) alloys, Phys. Rev. B, 75, 104117.

Luo, H., Ghandehari, K., Greene, R.G., Ruoff, A.L., Trail, S.S., DiSalvo, F.J., 1995, Phase transformation of BeSe and BeTe to the NiAs structure at high pressure, Phys. Rev. B, 52, 7058-7064.

Ozolinš, V., Wolverton, C., Zunger, A., 1998, Cu-Au, Ag-Au, Cu-Ag and Ni-Au intermetallics: First-principles study of temperature-composition phase diagrams and structures, Phys. Rev. B, 57, 6427-6443.

Pagès, O., Postnikov, A.V., Chafi, A., Bormann, D., Simon, P., Glas, F., Firszt, F., Paszkowicz, W., Tournie, E., 2010,

Non-random Be-to-Zn substitution in ZnBeSe alloys: Raman scattering and ab initio calculations, Eur. Phys. J. B, 73, 461-469.

Parlinski, K., Li, Z.Q., Kawazoe, Y., 1997, First-principles determination of the soft mode in cubic ZrO2, Phys. Rev. Lett., 78, 4063-4066.

Plazaola, F., Flyktman, J., Saarinen, K., Dobrzynski, L., Firszt, F., Legowski, S., Meczynska, H., Paszkowicz, W., Reniewicz, H., 2003, Defect characterization of ZnBeSe solid solutions by means of positron annihilation and photoluminescence techniques, J. Appl. Phys., 94, 1647-1653.

Sanchez, J.M., Ducastelle, F., Gratias, D., 1984, Generalized cluster description of multicomponent systems, Physica A, 128, 334-350.

Vanderbilt, D., 1990, Soft self-consistent pseudopotentials in a generalized eigenvalue formalism, Phys. Rev. B, 41, 7892-7895.

van der Ven, A., Aydinol, M.K., Ceder, G., Kresse, G., Hafner, J., 1998, First-principles investigation of phase stability in LixCoO2, Phys. Rev. B, 58, 2975-2987.

van de Walle, A., Ceder, G., 2002a, Automating first-principles phase diagram calculations, J. Phase Equilib., 23, 348-359.

van de Walle, A., Asta, M., Ceder, G., 2002, The alloy theoretic automated toolkit: A user guide, Calphad, 26, 539-553.

van de Walle, A., Asta, M., 2002, Self-driven lattice-model Monte Carlo simulations of alloy thermodynamics, Modelling Simul. Mater. Sci. Eng., 10, 521-538.

van de Walle, A., Ceder, G., 2002b, The effect of lattice vibra-tions on substitutional alloy thermodynamics, Rev. Mod. Phys., 74, 11-45.

Vèrié, C., 1997, Expected pronounced strengthening of II-VI lattices with beryllium chalcogenides, Mater. Sci. Eng. B, 43, 60-64.

FIRST PRINCIPLES PHASE DIAGRAM CALCULATIONS FOR THE ZnSe-BeSe SYSTEM BY THE MONTE CARLO THERMODYNAMIC INTEGRATION METHOD

Abstract

The paper presents the results of theoretical investigations of the phase stability of the Zn1-xBexSe solid solution as a function of temperature and component concentration. The T-x phase diagram was determined on the basis of the thermodynamic potential calculated by the Monte Carlo thermodynamic integration method within the semi-grand-canonical ensemble. The thermodynamic calculations employed a lattice Hamiltonian in the form of the so-called cluster expansion (an Ising-type model). The energy coefficients of the cluster expansion were determined by fitting the lattice Hamiltonian to the formation enthalpies calculated with first-principles quantum methods for 33 Zn1-xBexSe superstructures over the whole concentration range. The thermodynamic calculations also included the contribution to the free energy from lattice vibrations, which was determined for the 33 superstructures within the quasi-harmonic approximation.

The T-x phase diagram obtained from the calculations is characterized by an asymmetric miscibility gap. Thermodynamic calculations based solely on the configurational part of the free energy


yield a phase diagram with an upper critical point at xC = 0.69 and TC = 1020 K. Including the lattice vibrations gives a lower critical temperature, TC = 860 K (xC = 0.65). The presented investigations have shown that below room temperature the miscibility of the ZnSe and BeSe phases is possible only in narrow concentration ranges (x ≈ 0 and x ≈ 1). At elevated temperatures, 400 K < T < TC, the miscibility of the ZnSe and BeSe phases is also possible in Zn1-xBexSe solutions with an enhanced zinc content.

Received: September 20, 2012 Received in a revised form: November 19, 2012

Accepted: November 30, 2012


MODELLING MICROSTRUCTURE EVOLUTION DURING EQUAL

CHANNEL ANGULAR PRESSING OF MAGNESIUM ALLOYS USING CELLULAR AUTOMATA FINITE ELEMENT METHOD

MICHAL GZYL1, *, ANDRZEJ ROSOCHOWSKI1, ANDRZEJ MILENIN2, LECH OLEJNIK3

1 University of Strathclyde, 75 Montrose Street, Glasgow G1 1XJ, United Kingdom

2 AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland 3 Warsaw University of Technology, ul. Narbutta 85, 02-524 Warszawa, Poland

*Corresponding author: [email protected]

Abstract

Equal channel angular pressing (ECAP) is one of the most popular methods of obtaining ultrafine grained (UFG) metals. However, only relatively short billets can be processed by ECAP due to a force limitation. A solution to this problem could be the recently developed incremental variant of the process, the so-called I-ECAP. Since I-ECAP can deal with continuous billets, it can be widely used in industrial practice. Recently, many researchers have put effort into obtaining UFG magnesium alloys which, due to their low density, are very promising materials for weight and energy saving applications. It was reported that microstructure refinement during ECAP is controlled by dynamic recrystallization and that the final mean grain size depends mainly on the processing temperature.

In this work, the cellular automata finite element (CAFE) method was used to investigate microstructure evolution during four passes of ECAP and its incremental variant I-ECAP. The cellular automata space dynamics is determined by transition rules, whose parameters are strain, strain rate and temperature obtained from the FE simulation. An internal state variable model describes the total dislocation density evolution and transfers this information to the CA space. The developed CAFE model calculates the mean grain size and generates a digital microstructure prediction after processing, which could be useful for estimating the mechanical properties of the produced UFG metal.

Fitting and verification of the model was done using the experimental results obtained from I-ECAP of an AZ31B magnesium alloy and the data derived from literature. The CAFE simulation results were verified for the temperature range 200-250 °C and strain rate 0.01-0.5 s-1; good agreement with experimental data was achieved. Key words: severe plastic deformation, equal channel angular pressing, ultrafine grained metals, magnesium alloys, cellu-lar automata finite element method

1. INTRODUCTION

1.1. Cellular automata finite element (CAFE) method

The cellular automata (CA) technique is used in materials science to provide a digital representation of material microstructure and to simulate its evolution during processing (Das, 2010; Madej et al., 2006; Svyetlichnyy, 2012). The material is represented as a lattice of finite cells, called the cellular automata space. The interactions between cells describe the dynamics of the simulated physical phenomenon. The mathematical description of the interactions is introduced by transition functions (transition rules). The current state of each cell is determined by the states of its neighbours and its own state in the previous step. The cellular automata finite element (CAFE) approach is


an example of a multi-scale modelling approach. It is a combination of micro scale modelling using cellular automata and finite element (FE) analysis, the most popular method of metal forming simulation. The advantages of using a multi-scale simulation approach are evident, especially when ultrafine grained materials are considered. Using the CAFE method, not only the mean grain size but also the microstructure homogeneity and grain size distribution can be calculated.

1.2. Equal channel angular pressing and its incremental variant

In order to improve the mechanical properties of magnesium and other metals many thermo-mechanical processes have been proposed. Severe plastic deformation (SPD) processes, in which a very large strain is imposed on the material to refine its grain microstructure and improve its strength, are very promising. Equal channel angular pressing (ECAP) was developed by Segal (1995). In the ECAP process, a billet is pushed through a die with two channels which have the same cross section (circular or rectangular) and intersect at an angle that usually varies from 90° to 135°. Plastic strain is introduced into the metal by simple shear, which occurs at the channel intersection. Since the billet dimensions remain unchanged, the process can be repeated in order to accumulate a desired high strain.

Due to the force limitation only a relatively small amount of material can be processed at each stage. That is the reason why ECAP is not widely used in industrial practice. Recently, incremental ECAP (I-ECAP) has been developed by Rosochowski and Olejnik (2007). In I-ECAP, the stages of material feeding and plastic deformation are separated, which reduces the feeding force dramatically. The tool configuration consists of a punch working in a reciprocating manner and a die leading and feeding the material in consecutive steps. When the punch moves away from the die, the billet is fed by a small increment. Then, when feeding stops and the billet is in a fixed position, the punch approaches and plastically deforms it. The mode of deformation is simple shear, the same as in classical ECAP.

1.3. Microstructure evolution during ECAP of Mg alloys

Figueiredo and Langdon (2010) presented a model which states that the mechanism of grain refinement during ECAP processing of the AZ31 magnesium alloy is dynamic recrystallization (DRX). This hypothesis is based on the occurrence of a bimodal microstructure after ECAP processing. They also introduced a 'critical grain size' term in order to explain that a homogeneous grain size distribution can be achieved only if the initial mean grain size is small enough. Otherwise, the newly formed recrystallized grains are not able to fully consume the initial coarse grains. This observation was made earlier by other researchers (Janeček et al., 2007; Lapovok et al., 2008; Ding et al., 2009). Therefore, cellular automata transition rules usually used to describe DRX during hot forming of metals are applied in the presented model.

2. MODEL DESCRIPTION

2.1. Overview

In the present work, microstructure evolution during ECAP and its incremental variant is modelled using the cellular automata finite element (CAFE) technique. The cellular automata (CA) space plays the role of a digital material representation in the meso scale; artificial grains are represented by CA cells. Microstructure evolution is described by transition functions, whose parameters are the macro scale integration point variables obtained from the FE simulation: strain, strain rate and temperature (mapped from FE nodes). The internal state variable method (Pietrzyk, 2002), which treats dislocation density as a material variable, is used to describe the changes that occur in the micro scale during plastic deformation. The dislocation density in each cell controls the nucleation process. A random pentagonal neighbourhood and absorbing boundary conditions are used to define the CA space and its dynamics.

2.2. Dislocation density evolution

The evolution of dislocation density during hot metal forming is controlled by two competing processes: strain hardening and thermal softening caused by recovery or recrystallization. The increase in dislocation density is caused by the storage of dislocations, while the decrease in dislocation density results from the annihilation of dislocations.


The change of dislocation density due to a strain increment is described by equation (1) (Sellars and Zhu, 2000):

d\rho_i^{CA} = \frac{M}{A_1 b}\exp\left(\frac{Q_1}{RT}\right) d\varepsilon - A_2\, \rho_{i-1}^{CA}\, \dot{\varepsilon}\, \exp\left(-\frac{Q_2}{RT}\right) d\varepsilon   (1)

where: M – Taylor factor, b – Burgers vector, dε – effective strain increment, Q1 – effective activation energy, R – gas constant, T – temperature, ε̇ – effective strain rate, ρ_{i-1}^{CA} – dislocation density at the previous strain increment, Q2 – apparent energy, A1, A2 – fitting parameters.

The presented approach is similar to the model developed by Mecking and Kocks (1981), where dislocation density evolution is also introduced as a competition between dislocation storage and annihilation. The decrease in dislocation density is dependent on temperature, as dislocation annihilation is characterized as a thermally activated process (Madej et al., 2006).

The parameters of equation (1) were derived from the literature (table 1). The fitting coefficients were determined using the Hooke-Jeeves optimization method. The dependence of dislocation density on strain at 200 °C and a strain rate of 0.01 s-1 based on these parameters, together with experimental results obtained for pure magnesium under similar conditions (Klimanek & Poetzsch, 2002; Mathis et al., 2004), is illustrated in figure 1.

Table 1. Parameters of the dislocation density evolution equation (1).

A1 = 3.85e3; A2 = 30; Q1 = 147 kJ/mol (Barnett, 2003); Q2 = 21 kJ/mol (Das, 2010); M = 2.38 (Chino, 2006); b = 0.32 nm (Chino, 2006).

Fig. 1. Dislocation density evolution at 200 °C and strain rate 0.01 s-1.
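The incremental character of the dislocation density update can be sketched as follows; the hardening and recovery terms are generic placeholder functions written in the spirit of equation (1) rather than the calibrated expressions of the model, and all numerical values are illustrative only:

```python
import numpy as np

R = 8.314  # J/(mol K)

def hardening(T, params):
    """Storage contribution per unit strain (placeholder functional form)."""
    return params["h0"] * np.exp(params["Qh"] / (R * T))

def recovery(rho, T, strain_rate, params):
    """Thermally activated annihilation, proportional to the current density (placeholder)."""
    return params["r0"] * rho * strain_rate * np.exp(-params["Qr"] / (R * T))

def integrate_rho(total_strain, d_eps, T, strain_rate, params, rho0=1e12):
    """March equation-(1)-type increments d_rho = (hardening - recovery) * d_eps."""
    rho = rho0
    for _ in range(int(total_strain / d_eps)):
        rho += (hardening(T, params) - recovery(rho, T, strain_rate, params)) * d_eps
    return rho

params = {"h0": 5e13, "Qh": 0.0, "r0": 40.0, "Qr": 21e3}  # illustrative values only
print(f"rho after strain 1.15: {integrate_rho(1.15, 1e-3, 473.0, 0.01, params):.3e} m^-2")
```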

2.3. Cellular automata space evolution

Microstructure evolution during processing is modelled using transition rules, which describe how each CA cell state changes depending on its own state and its neighbours' states after the previous time increment. The CA space dynamics can be described by the following rules:
– a cell becomes a nucleation site when its dislocation density exceeds a critical value; the privileged areas for the dislocation density rise are grain boundaries and their vicinity (Galiyev et al., 2003);
– a cell changes its state to recrystallized when one of its neighbours is recrystallized; a grain grows as long as the virtual energy related to its growth potential is greater than zero;
– new grains cannot be consumed by other recrystallized grains.

A critical value of dislocation density in a CA

cell must be reached to create a new grain nucleus in this cell. The nucleation process is favoured where the level of stored energy is higher than in other areas. Following these arguments, equation (2) is used to calculate the critical dislocation density. It is a simplified form of the formula introduced by Roberts and Ahlblom (1978):

\rho_{crit} = \left(p_1\, \gamma\, T^{\,p_2}\right)^{1/3}   (2)

where: γ – grain boundary energy, p1, p2 – fitting parameters (table 2).

The grain boundary energy is dependent on the misorientation angle between neighbouring grains and is evaluated from the equation derived by Read and Shockley (1950) for low angle grain boundaries; for high angle grain boundaries it is kept constant. The misorientation between neighbouring grains is calculated using a method presented by Zhu et al. (2000).

The orientation of each grain is described by three Euler angles: φ1, Φ, φ2 (Bunge notation). Wang and Huang (2003) showed the relation between crystallographic orientation and texture component in hcp metals. Since slip on the basal plane is the most favourable deformation mechanism for magnesium, dislocation accumulation is more probable for grains with orientation closer to basal than to the prismatic or



pyramidal one. Processing at elevated temperatures is being simulated, therefore twinning is not taken into account. Moreover, twinning was not revealed in microscopic observations.

At each time step, the total dislocation density increment is divided among a number of randomly chosen cells N, whose extrinsic dislocation density will be increased. The parameter is dependent on temperature and strain rate obtained from the FE simulation. Since grain boundaries are preferred areas for nucleation, more new nuclei are expected to occur when there are more grains in the CA space. The evolution of N is given by equation (3):

N = p_3\,\dot{\varepsilon}^{\,p_4}\,n_d^{\,p_5}\exp\left(p_6 T\right)   (3)

where: N – number of cells with increased extrinsic dislocation density, nd – number of grains in CA space, p3, p4, p5, p6 – fitting parameters.

Grain growth rate is associated with the virtual energy assigned to each new nucleus. Since the mean DRXed grain size depends mostly on temperature, the grain growth energy is a function of temperature and misorientation angle. As the new grain grows, its energy is lowered. The process is stopped when the grain growth energy is equal to zero. In this condition, the grain has no potential for further expansion. The grain growth energy is introduced using an empirical equation (4):

E_{gg}^{grain} = p_7\exp\left(p_8 T\right)\left(\theta/10\right)^{0.7}   (4)

where: T – temperature, θ – misorientation angle, p_7, p_8 – fitting parameters.
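The per-time-step bookkeeping implied by equations (3) and (4) can be written as two small helpers; the reconstructed functional forms and the temperature handling are assumptions, and the helpers are meant only to show where the fitted parameters of table 2 enter:

```python
import numpy as np

def cells_to_harden(strain_rate, n_grains, T, p3=0.6677e18, p4=0.004261,
                    p5=1.06699, p6=-0.092):
    """Number of randomly chosen cells whose extrinsic dislocation density is
    raised in one time step, following the reconstructed equation (3)."""
    return max(1, int(round(p3 * strain_rate**p4 * n_grains**p5 * np.exp(p6 * T))))

def grain_growth_energy(T, misorientation, p7=1.1463, p8=0.094):
    """Virtual growth energy assigned to a new nucleus, reconstructed equation (4);
    the energy is lowered as the grain grows and growth stops at zero."""
    return p7 * np.exp(p8 * T) * (misorientation / 10.0) ** 0.7

print(cells_to_harden(strain_rate=0.01, n_grains=50, T=473.0))
```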

Table 2. Parameters of CA space evolution equations (2)-(4).

p_1         p_2        p_3         p_4        p_5       p_6     p_7      p_8
4.0756e70   -31.4176   0.6677e18   0.004261   1.06699   -0.092  1.1463   0.094

3. EXPERIMENTAL PROCEDURE

Commercially extruded AZ31B magnesium rods with 17 mm diameter were machined using the EDM cutting technique in order to obtain bars with a square cross-section of 10 x 10 mm2 and a length equal to 100 mm. A double-billet variant of I-ECAP was realised using a 1 MN hydraulic servo press (Rosochowski et al., 2008). The billets were fed using a screw jack whose action was synchronised with the reciprocating movement of the punch. The punch movement followed an externally generated sine waveform with a frequency of 0.5 Hz and an amplitude equal to 1.8 mm. The motor driven screw jack was controlled using National Instruments hardware and software (LabVIEW). The feeding stroke was equal to 0.2 mm.

A die with 90° angle between channels was used to conduct four passes of I-ECAP, which resulted in the total strain of 4.6. Route BC was used, which meant that after each pass the billet was rotated by 90°. AZ31B billets and dies were heated up to 250°C using electric heaters. Temperature during processing was kept constant within ±2 °C, based on the readings obtained from a thermocouple located near deformation zone.

4. RESULTS

FE simulations were run using the Abaqus/Explicit commercial software. CA microstructure evolution calculations and visualisation were performed using self-developed software. The CA space dimensions were 400 x 400 cells, which corresponded to a 100 x 100 µm2 area of real material. The same conditions as during the I-ECAP experiment were used in the simulation. In order to investigate the effect of temperature and strain rate on the final mean grain size and microstructure homogeneity, additional experimental results obtained for different process parameters were derived from the literature (table 3). Simulations were performed for temperatures of 200, 225 and 250°C and strain rates within the range 0.01-0.5 s-1.

The initial microstructure of the I-ECAP processed material was heterogeneous; coarse grains were surrounded by smaller ones (figure 2a), which could be attributed to DRX during hot extrusion of the supplied rods. The corresponding digital representation of the as-received material was obtained by uniform grain growth and simulation of DRX (figure 2b). The mean grain size obtained after I-ECAP processing was equal to 6.15 µm (figure 2c), which is similar to the results obtained by Suwas et al. (2007). Strain rate values during both processes were significantly different, which could indicate that processing temperature has a dominant effect on the final grain size. The simulated microstructure was similar to the real one and the predicted grain size was 6.3 µm (figure 2d).



Table 3. Simulation parameters.

                     Initial temperature, °C   Strain rate, s-1   Channel angle, °
I-ECAP (this work)   250                       0.5                90
Ding et al. (2009)   225                       0.3                120
Ding et al. (2009)   200                       0.3                120
Jin et al. (2006)    225                       0.01               90

Fig. 2. Microstructure images of the as-received material (a) and after the 4th pass of I-ECAP at 250 °C (c); corresponding simulation results in (b) and (d), respectively.

Ding et al. (2009) conducted ECAP of the AZ31 magnesium alloy at 200°C and 225°C using a die with a 120° channel angle. The initial microstructure and its digital material representation are shown in figure 3. The initial mean grain size was 7.2 µm, the same as the average grain size of the CA representation. The microstructure obtained after 4 passes at 200°C and the corresponding results of modelling microstructure evolution are shown in figure 3 as well. After 4 passes at 200°C the grain size was reduced to 1.8 µm, while the model predicted 1.7 µm. When processing at 225°C, the microstructure was less homogeneous than at 200°C. The mean grain size calculated using the developed model was equal to 2.2 µm, while the experimental result was 2.4 µm.

The initial mean grain size of the material processed by Jin et al. (2006) was equal to 15.6 µm; coarse grains and a few smaller grains were observed (figure 4a). A digital material representation was generated using non-uniform grain growth: 80% of grains grew more slowly than the others and the mean grain size was 15.8 µm. After the first pass at 225°C, coarse grains were surrounded by colonies of very small grains (figure 4c). The interior of the coarse grains was not consumed during the DRX process. The mean grain size after the first pass was measured to be 4.1 µm, while the model predicted 4.25 µm.

Fig. 3. Microstructure images of the as-received material (a) and after the 4th pass of ECAP at 200 °C (c) obtained by Ding et al. (2009); corresponding simulation results in (b) and (d), respectively.

Fig. 4. Microstructure images of the as-received material (a) and after the 1st pass of ECAP at 225 °C (c) obtained by Jin et al. (2006); corresponding simulation results in (b) and (d), respectively.

Results obtained from the developed CAFE model are in good agreement with experimental data (figure 5). The mean grain size after the first pass depends strongly on the initial microstructure. A significant grain refinement is observed after the first pass, but it is not sufficient to refine the coarse grain dominated microstructure. Only grains smaller than ~15 µm can be fully recrystallized during the first pass of ECAP at 200°C. As a result, a heterogeneous grain size distribution is obtained. Further deformation is needed to refine and homogenize the microstructure. Grain refinement is limited by temperature; it is


shown that a mean grain size as small as 2 μm can be obtained at 200°C. Although a smaller grain size cannot be obtained at a given temperature, further processing leads to microstructure homogenization.

Fig. 5. Mean grain size obtained from experiments and CAFE simulations.

5. CONCLUSIONS

A multi-scale CAFE approach was developed in order to model microstructure evolution during equal channel angular pressing. Computations were performed for various temperatures and strain rates that are typical for processing of magnesium alloys. Numerical results were verified using experimental data from conventional and incremental ECAP. The former were derived from the literature, the latter were obtained from the I-ECAP experiment. The model correctly predicted both the mean grain size after subsequent passes of ECAP/I-ECAP and the microstructure homogeneity. In particular, a heterogeneous grain size distribution after the first pass of ECAP for an initially coarse grained microstructure was predicted, as well as its further homogenization. Future work will be focused on modelling grain reorientation due to deformation by simple shear.

Acknowledgements. Financial support from

Carpenter Technology Corporation is kindly acknowledged.

REFERENCES

Barnett, M.R., 2003, Recrystallization during and following hot working of magnesium alloy AZ31, Materials Science Forum, 419-422, 503-508.

Chino, Y., Hoshika, T., Lee, J.-S., 2006, Mechanical properties of AZ31 Mg alloy recycled by severe deformation, J. Mater. Res., 21, 754-760.

Das, S., 2010, Modeling mixed microstructures using a multi-level cellular automata finite element framework, Com-putational Materials Science, 47, 705-711.

Ding, S.X., Chang, C.P., Kao, P.W., 2009, Effects of Processing Parameters on the Grain Refinement of Magnesium Al-loy by Equal-Channel Angular Extrusion, Metallurgical and Materials Transactions A, 40A, 415-425.

Figueiredo, R.B., Langdon, T.G., 2010, Grain refinement and mechanical behavior of a magnesium alloy processed by ECAP. Journal of Materials Science, 45, 4827–4836.

Galiyev, A., Kaibyshev, R., Sakai, T., 2003, Continuous Dy-namic Recrystallization in Magnesium Alloy, Materials Science Forum, 419-422, 509-514.

Janeček, M., Popov, M., Krieger, M.G., Hellmig, R.J., Estrin, Y., 2007, Mechanical properties and microstructure of a Mg alloy AZ31 prepared by equal-channel angular pressing. Materials Science and Engineering A, 462, 116–120.

Jin, L., Lin, D., Mao, D., Zeng, X., Chen, B., Ding, W., 2006, Microstructure evolution of AZ31 Mg alloy during equal channel angular extrusion, Materials Science and Engi-neering A, 423, 247-252.

Klimanek, P., Poetzsch, A., 2002, Microstructure evolution under compressive plastic deformation of magnesium at different temperatures and strain rates, Materials Sci-ence and Engineering A, 324, 145-150.

Lapovok, R., Estrin, Y., Popov, M.V., Langdon, T.G., 2008, Enhanced Superplasticity in a Magnesium Alloy Pro-cessed by Equal-Channel Angular Pressing with a Back-Pressure. Advance Engineering Materials, 10, 429-433.

Madej, L., Hodgson, P.D., Pietrzyk, M., 2006, Development of the Multi-scale Analysis Model to Simulate Strain Lo-calization Occurring During Material Processing, Arch Comput Methods Eng, 16, 287–318.

Mathis, K., Nyilas, K., Axt, A., Dragomir-Cernatescu, I., Ungar, T., Lukac, P., 2004, The evolution of non-basal disloca-tions as a function of deformation temperature in pure magnesium determined by X-ray diffraction, Acta Mate-rialia, 52, 2889–2894.

Mecking, H., Kocks, U.F., 1981, Kinetics of flow and strain-hardening, Acta Metallurgica, 29, 1865-1875.

Pietrzyk, M., 2002, Through-process modeling of microstructure evolution in hot forming of steels, Journal of Materials Processing Technology,125-126, 53-62.

Read, W. T., Shockley, W., 1950, Dislocation Models of Crystal Grain Boundaries, Phys. Rev. 78, 275–289.

Roberts, W., Ahlblom, B., 1978, A nucleation criterion for dynamic recrystallization during hot working, Acta Met-allurgica, 26, 801-813.

Rosochowski, A., Olejnik, L., 2007, FEM simulation of incre-mental shear., Proc. Conf. Esaform 2007, eds, Cueto, E., Chinesta, F., Zaragoza, Spain, 653-658.

Rosochowski, A., Olejnik, L., Richert, M., 2008, Double-Billet Incremental ECAP, Materials Science Forum, 584-586, 139-144.

Segal, V.M., 1995, Materials processing by simple shear, Mate-rials Science and Engineering A, 197, 157-164.

Sellars, C.M., Zhu, Q., 2000, Microstructural modelling of aluminium alloys during thermomechanical processing, Materials Science and Engineering A, 280, 1-7.

Suwas, S., Gottstein, G., Kumar, R., 2007, Evolution of crystal-lographic texture during equal channel angular extrusion (ECAE) and its effects on secondary processing of mag-nesium, Materials Science and Engineering A, 471, 1–14.

Svyetlichnyy, D. S., 2012, Reorganization of cellular space during the modeling of the microstructure evolution by


frontal cellular automata, Computational Materials Sci-ence, 60, 153–162.

Wang, Y. N., Huang, J.C., 2003, Texture analysis in hexagonal materials, Materials Chemistry and Physics, 81, 11–26.

Zhu, G., Mao, W., Yu, Y., 2000, Calculation of misorientation distribution between recrystallized grains and deformed matrix, Scripta materialia, 42, 37–41.

MODELLING OF MICROSTRUCTURE EVOLUTION DURING EQUAL CHANNEL ANGULAR PRESSING OF MAGNESIUM ALLOYS USING THE CAFE METHOD

Abstract

Equal channel angular pressing (ECAP) is one of the most popular methods of obtaining ultrafine-grained metals. However, because of the large forces needed to carry out the process, only relatively short billets can be extruded. A solution to this problem may be the developed incremental variant of the process, the so-called I-ECAP. Since infinitely long workpieces can be processed by I-ECAP, it may find wide application in industrial practice. The mechanism of grain refinement during plastic working of magnesium alloys differs significantly from that of metals such as aluminium or copper and their alloys. Recent results indicate that the grain refinement mechanism during ECAP is controlled by dynamic recrystallization, and the final mean grain size depends mainly on the process temperature.

In the present work, the coupled cellular automata finite element (CAFE) method was used to describe microstructure evolution during four passes of ECAP and of its incremental variant, I-ECAP. The dynamics of changes in the cellular automata space is determined by transition rules, whose parameters are strain, strain rate and temperature obtained from finite element simulations. An internal variable model describes the increase of the total dislocation density and passes this information to the cellular automata space. The developed CAFE model calculates the mean grain size and generates a digital image of the microstructure, which can be useful for determining the mechanical properties of the obtained material.

Fitting and verification of the model were performed using results obtained from the conducted incremental ECAP of the AZ31B magnesium alloy and literature data. The CAFE simulation results were verified for the temperature range 200-250°C and strain rates of 0.01-0.5 s-1; very good agreement with the experimental results was obtained.

Received: September 20, 2012 Received in a revised form: November 4, 2012

Accepted: November 21, 2012


THE MIGRATION OF KIRKENDALL PLANE DURING DIFFUSION

BARTEK WIERZBA

Interdisciplinary Centre for Materials Modelling, FMSci&C, AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Kraków, Poland

*Corresponding author: [email protected]

Abstract

In this study some aspects of Kirkendall and lattice plane migration in binary diffusion couples are studied by means of numerical simulations using the bi-velocity method. The bi-velocity method (Darken method) is based on the postulate of the unique transport of mass due to diffusion. The method deals with 1) composition dependent diffusivities, 2) different partial molar volumes of components, 3) the stress field during the diffusion process and 4) entropy production. It is shown that the method allows for calculation of the trajectory of the Kirkendall plane in binary diffusion couples.
Key words: Kirkendall plane, bi-velocity method, interdiffusion, trajectory

1. INTRODUCTION

In recent years, both experimental characterization and computer simulations have revealed many new phenomena that are not yet fully explained by existing theories and models. A list of examples includes: (i) multiple Kirkendall planes (bifurcations), (ii) stability of an individual Kirkendall plane, and (iii) discontinuity of the Kirkendall velocity at moving interphase boundaries (van Dal et al., 2001; van Loo et al., 1990).

The Kirkendall effect (Smigelskas & Kirkendall, 1947) always accompanies interdiffusion and manifests itself in many phenomena: the migration of inclusions inside the diffusion zone, the development of porosity, the generation of stress and plastic deformation of the material. These diffusion-induced processes are of concern in a wide variety of structures including composite materials, coatings, weld junctions and thin-film electronic devices (Boettinger et al., 2007; Gusak, 2010).

While Darken's treatment of diffusion has withstood the test of time, the efforts directed towards its implementation into physics and thermodynamics are far from being accepted. The reason may be attributed to the inherent experimental difficulties involved in the measurement of material velocities. The rationalization and formal description of the Kirkendall effect are by no means trivial. Regardless of intensive work in this field, a number of fundamental questions still remain to be answered. Is the Kirkendall plane unique? In other words, could the inert particles placed at the initial contact interface migrate differently in the diffusion zone, so that two (or more) "Kirkendall planes" can be expected? (van Dal et al., 2001).

Experiments confirm that the "fiducial markers" may have different trajectories. However, the existing methods dealing with Kirkendall trajectories do not quantify the bifurcations, trifurcations, instability and discontinuity (van Loo et al., 1990).

The bi-velocity method is a generalization of the Darken method of interdiffusion. The method is based on a rigorous mathematical derivation of the mass, momentum and energy conservation laws (Danielewski


& Wierzba, 2010). It allows the calculation of the densities, drift velocity, energy and entropy densities in multicomponent systems. The method can be used when the gradient of the mechano-chemical potential (from fairly well known thermodynamic properties) and the diffusivities as a function of concentrations (from measured tracer diffusivities) are known. The method is limited to the axiatoric part of the stress tensor only, i.e. the rotations of the system are neglected (rot υ = 0). The purpose of this paper is to use the bi-velocity method to calculate the Kirkendall trajectory (position of the Kirkendall plane).

2. THE BI-VELOCITY METHOD (DANIELEWSKI & WIERZBA, 2010)

The core of the bi-velocity method is the mass balance equation:

\frac{\partial \rho_i}{\partial t} = -\frac{\partial}{\partial x}\left(J_i^{d} + J_i^{drift}\right), \quad i = 1,2   (1)

where ρ_i is the mass density; J_i^d and J_i^drift denote the diffusion and drift flux, respectively.

The diffusion flux, J_i^d = ρ_i υ_i^d, in the case when no external forces are considered, is given by the Nernst-Planck equation (Nernst, 1889; Planck, 1890):

J_i^{d} = -\frac{D_i M_i c_i}{RT}\,\frac{\partial}{\partial x}\left(\mu_i^{ch} + \Omega_i p\right), \quad i = 1,2   (2)

where υ_i^d denotes the diffusion velocity, μ_i^ch is the chemical potential, D_i the intrinsic diffusion coefficient, and Ω_i and M_i are the partial molar volume and molecular mass of the i-th component, respectively; R and T are the gas constant and temperature, and p denotes the pressure field acting on the components. The density ρ_i is related to the concentration by ρ_i = M_i c_i.

To calculate the drift flux the Volume Continuity Equation (VCE) is used. The differential form of the VCE follows:

\frac{\partial}{\partial x}\left(\sum_{i=1}^{2}\Omega_i\,\frac{\rho_i \upsilon_i^{d}}{M_i} + \upsilon^{drift}\right) = 0   (3)

The drift velocity, υ^drift, of the mixture can be defined as:

\upsilon^{drift} = -\sum_{i=1}^{2}\Omega_i\,\frac{\rho_i \upsilon_i^{d}}{M_i}   (4)

The pressure, p, generated during the diffusion process is described by the Cauchy stress tensor. Thus, the pressure evolution can be approximated from the dilatation of an ideal crystal:

\frac{dp}{dt} = -\frac{E}{3\left(1-2\nu\right)}\,\frac{\partial}{\partial x}\sum_{i=1}^{2}\Omega_i\,\frac{\rho_i \upsilon_i^{d}}{M_i}   (5)

where E and ν are Young's modulus and the Poisson ratio, respectively.
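A compact finite-difference sketch of one explicit step built from equations (1), (2) and (4) is given below; the ideal-solution simplification of the chemical potential is used, the pressure contribution of equation (5) is omitted, and the grid, time step and material data are assumptions chosen only to illustrate the structure of the scheme:

```python
import numpy as np

nx, L = 200, 0.02                 # grid points, couple thickness (cm)
x = np.linspace(0.0, L, nx); dx = x[1] - x[0]
dt, n_steps = 0.5, 200            # explicit time step (s), number of steps

omega = np.array([7.0, 7.0])      # partial molar volumes, cm^3/mol (assumed)
D = np.array([2e-10, 1e-10])      # intrinsic diffusivities, cm^2/s (assumed)

# A-B couple: pure A on the left, pure B on the right (molar concentrations)
c = np.zeros((2, nx))
c[0, : nx // 2] = 1.0 / omega[0]
c[1, nx // 2:] = 1.0 / omega[1]

def grad(f):
    return np.gradient(f, dx)

for _ in range(n_steps):
    # eq. (2) for an ideal solution reduces to a Fickian molar flux J_i = -D_i dc_i/dx
    J_d = np.array([-D[i] * grad(c[i]) for i in range(2)])
    # eq. (4): drift velocity as the volume-weighted sum of the diffusion fluxes
    v_drift = -(omega[:, None] * J_d).sum(axis=0)
    # eq. (1): balance equation with diffusion and drift contributions
    for i in range(2):
        c[i] -= dt * grad(J_d[i] + c[i] * v_drift)

print(v_drift[nx // 2])   # drift velocity near the initial contact plane
```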

The bi-velocity method also allows the calculation of the energy and entropy balances. Assuming time independent external forcing (∂V^ext/∂t = 0), the internal energy conservation law becomes:

\frac{\partial\left(\rho_i u_i\right)}{\partial t} = -\frac{\partial}{\partial x}\left[\rho_i u_i\left(\upsilon_i^{d} + \upsilon^{drift}\right)\right] - c_i\Omega_i\left(\upsilon_i^{d} + \upsilon^{drift}\right)\frac{\partial p}{\partial x}   (6)

where u_i denotes the internal energy. The overall internal energy, u, of the mixture can be calculated from its components' counterparts, u = Σ_{i=1}^{2} ρ_i u_i.

Finally, the entropy, s, when diffusion is not negligible, can be calculated from the partial Gibbs-Duhem relation, T s_i = u_i - μ_i^ch/M_i + Ω_i p/M_i, and s = Σ_{i=1}^{2} ρ_i s_i.

3. RESULTS

In this paper three different methods of calculating the position of the Kirkendall plane in binary A-B diffusion couples are shown and compared, namely: 1) the "curve method" (Aloke, 2004; Gusak, 2010), 2) the "trajectory method" and 3) the "entropy approximation". The first two methods are based on the drift velocity and its integral; the last method assumes that the position of the Kirkendall plane is defined by the local maxima of the entropy distribution.

The "curve method". The Kirkendall velocity

can be calculated from the drift velocity, equation (4). Assuming a diffusion process in a binary system and that the partial molar volumes are constant and equal, Ω_i = Ω_j = Ω for all i, j, the drift velocity can be rewritten in the following form:

\upsilon^{drift} = -\Omega\left(c_1\upsilon_1^{d} + c_2\upsilon_2^{d}\right)   (7)


From the Euler theorem, the overall molar volume is equal to the inverse of the overall concentration, Ω = 1/c. Thus:

\upsilon^{drift} = -\left(N_1\upsilon_1^{d} + N_2\upsilon_2^{d}\right)   (8)

where the molar ratio N_i = c_i/c.

Substituting equation (2), the drift velocity in a binary system, when the pressure field is neglected, can be rewritten in the following form:

\upsilon^{drift} = N_1\frac{D_1}{RT}\frac{\partial\mu_1^{ch}}{\partial x} + N_2\frac{D_2}{RT}\frac{\partial\mu_2^{ch}}{\partial x}   (9)

When ideal solution behaviour is assumed, i.e. ∂μ_i^ch/∂x = RT ∂(ln c_i)/∂x, the drift velocity takes its final form:

\upsilon^{drift} = \left(D_1 - D_2\right)\frac{\partial N_1}{\partial x}   (10)

In a diffusion controlled interaction, the Kirkendall plane is the plane of initial contact moving with a parabolic time dependence; thus, at the position of the Kirkendall plane:

\upsilon^{drift}\left(x_K, t\right) = \frac{dx_K}{dt} = \frac{x_K}{2t}   (11)

where x_K is the position of the Kirkendall plane at time t = t*. The position(s) of the Kirkendall plane(s) can be found at the point(s) of intersection between the drift velocity curve and the straight line calculated from equation (11).
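Given a drift-velocity profile on a grid (for instance from a simulation like the one sketched in section 2), the "curve method" amounts to locating the intersections with the straight line x/(2t); the profile used below is invented purely for illustration:

```python
import numpy as np

def kirkendall_plane_curve_method(x, v_drift, t):
    """Find intersections of the drift-velocity curve with the line x/(2t),
    i.e. the candidate Kirkendall plane positions of equation (11)."""
    f = v_drift - x / (2.0 * t)          # sign changes of f mark intersections
    idx = np.where(np.sign(f[:-1]) * np.sign(f[1:]) < 0)[0]
    # linear interpolation between the bracketing grid points
    return [x[i] - f[i] * (x[i + 1] - x[i]) / (f[i + 1] - f[i]) for i in idx]

# illustrative bell-shaped drift-velocity profile with a single intersection
x = np.linspace(-0.01, 0.01, 401)        # distance from the initial contact, cm
t = 36000.0                              # annealing time, s
v = 1e-7 * np.exp(-(x / 0.002) ** 2)     # assumed drift velocity, cm/s
print(kirkendall_plane_curve_method(x, v, t))
```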

"The trajectory method". The position of the Kirkendall plane can also be calculated by following the marker during the diffusion process, equation (12).

2

1

2 1 ,t

driftK K

t

x t x t t dt (12)

At each time step the new position of the Kirkendall plane is calculated.
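A minimal sketch of this time integration, with a hypothetical drift-velocity function standing in for the field computed by the bi-velocity model:

```python
import numpy as np

def follow_marker(x0, t1, t2, v_drift_of, n_steps=1000):
    """Integrate equation (12): x_K(t2) = x_K(t1) + int v_drift(x_K, t) dt."""
    x, dt = x0, (t2 - t1) / n_steps
    for k in range(n_steps):
        t = t1 + k * dt
        x += v_drift_of(x, t) * dt      # explicit Euler step along the trajectory
    return x

# assumed drift-velocity field decaying as 1/sqrt(t)
v = lambda x, t: 1e-7 * np.exp(-(x / 0.002) ** 2) / np.sqrt(t / 3600.0)
print(follow_marker(0.0, 1.0, 36000.0, v))
```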

"The entropy approximation". In this studies it is postulated that the local maxima's calculated on the entropy distribution curve shows the positions of Kirkendall planes (the most favored places) in the diffusion couples.

Figure 1 shows the comparison of the methods presented above for calculating the position of the Kirkendall plane. The data used to calculate the diffusion process are presented in table 1.

Fig. 1. a) The concentration profile in binary A-B diffusion couple; The comparisons of the position of Kirkendall plane (vertical line) by different calculation methods: b) velocity curve; c) trajectory and d) entropy bi-velocity method.


Table 1. The data used in simulations of diffusion in the binary A-B diffusion couple.

T = 1273 K; t = 36000 s; thickness, d = 0.02 cm

A diffusion coefficient [cm2 s-1]   B diffusion coefficient [cm2 s-1]
N_A·10^-10                          10^-10

The presented simulation results were calculated using the bi-velocity method. Figure 1 shows a) the concentration profile and the comparison of the calculation of the Kirkendall plane position where different methods were used: b) the velocity curve; c) the trajectory method and d) the entropy bi-velocity method in a binary A-B diffusion couple.

Figure 1 shows that the three presented methods give similar results when only one position of the Kirkendall plane is expected.

4. CONCLUSIONS

The bi-velocity method for interdiffusion allows a quantitative and qualitative description of the mass transport process in binary systems. The method is based on the postulate that each component's velocity υ_i must be divided into two parts: υ_i^d, the unique diffusion velocity, which depends on the diffusion potential gradient and is independent of the choice of the reference frame; and the drift velocity υ^drift. The drift velocity, common for all components, allows the calculation of the position and trajectory of the Kirkendall planes during the diffusion process. The method effectively deals with (1) composition dependent diffusivities, (2) different partial molar volumes of components, (3) the stress field during the diffusion process and (4) entropy production. The model was applied to the modelling of the trajectory of the Kirkendall planes in binary diffusion couples. The examples presented in this work show that the entropy curve can be used to approximate the position of the Kirkendall markers when the diffusion coefficients are composition dependent.

Acknowledgments. This work has been supported by the National Science Centre (NCN) in Poland, decision number DEC-2011/03/B/ST8/ 05970. The software CADiff is available from the author.

REFERENCES

Aloke, P., 2004, The Kirkendall effect in solid state diffusion, Technische Universiteit Eindhoven, Eindhoven

Boettinger, W. J., Guyer, J. E., Campbell, C. E. McFadden, G. B., 2007, Computation of the Kirkendall velocity and displacement fields in a one-dimensional binary diffu-sion couple with a moving interface, Proc. R. Soc. A, 463, 3347-3373.

Danielewski, M., Wierzba, B., 2010, Thermodynamically con-sistent bi-velocity mass transport phenomenology, Acta Mat. 58, 6717-6727.

Gusak, A. M., 2010, Diffusion-controlled Solid State Reactions In Alloys, Thin Films, and Nanosystems, Wiley-Vch Verlag GmbH & Co., Weinheim.

Nernst, W., 1889, Die elektromotorische Wirkamkeit der Ionen, Z. Phys. Chem. 4, 129-140 (in German).

Planck, M., 1890, Ber die potentialdierenz zwischen zwei verdnnten lsungen binrer elektrolyte, Annu Rev Phys Chem 40, 561-576 (in German).

Smigelskas, A.D. Kirkendall, E., 1947, Zinc Diffusion in Alpha Brass, Trans. AIME, 171, 130-142.

van Dal, M. J. H., Gusak, A. M., Cserhati, C., Kodentsov, A. A. van Loo F. J. J., 2001, Microstructural Stability of the Kirkendall Plane in Solid-State Diffusion, Phys. Rev. Lett. 86, 3352-3355.

van Loo, F. J. J., Pieraggi, B. Rapp, R. A., 1990, Interface mi-gration and the Kirkendall effect in diffusion-driven phase transformations, Acta Metall. Mater. 38, 1769-1779.

CRITERIA FOR THE EVOLUTION OF THE KIRKENDALL PLANE DURING THE DIFFUSION PROCESS

Abstract

The article presents the bi-velocity method, which allows the evolution of the Kirkendall plane during the diffusion process to be determined. The bi-velocity method is a generalization of the Darken method. It is based on the integral conservation laws of mass, momentum and energy. The algorithm for calculating the trajectory of the Kirkendall plane also allows the determination of: 1) component concentrations, 2) the drift velocity, 3) the energy and 4) the entropy production in multicomponent condensed phases. It is shown that the method allows the correct determination of the position of the Kirkendall plane during the diffusion process in binary systems.

Received: October 25, 2012 Received in a revised form: November 21, 2012

Accepted: December 12, 2012


MODELING OF STATIC RECRYSTALLIZATION KINETICS BY COUPLING CRYSTAL PLASTICITY FEM AND MULTIPHASE

FIELD CALCULATIONS

ONUR GÜVENC1, THOMAS HENKE1, GOTTFRIED LASCHET2, BERND BÖTTGER2, MARKUS APEL2, MARKUS BAMBACH1*, GERHARD HIRT1

1 Institute of Metal Forming, RWTH Aachen University, Intzestrasse 10, D-52056 Aachen, Germany

2 ACCESS e.V., RWTH Aachen, Intzestrasse 5, D-52072 Aachen, Germany *Corresponding author: [email protected]

Abstract

In multi-step hot forming processes, static recrystallization (SRX), which occurs in interpass times, influences the microstructure evolution, the flow stress and the final product properties. Static recrystallization is often simply modeled based on Johnson-Mehl-Avrami-Kolmogorov (JMAK) equations which are linked to the visco-plastic flow behavior of the material. Such semi-empirical models are not able to predict the SRX grain microstructure. In this paper, an approach for the simulation of static recrystallization of austenitic grains is presented which is based on the coupling of a crystal plasticity method with a multiphase field approach. The microstructure is modeled by a representative volume element (RVE) of a homogeneous austenitic grain structure with periodic boundary conditions. The grain microstructure is gener-ated via a Voronoi tessellation. The deformation of the RVE, considering the evolution of grain orientations and disloca-tion density, is calculated using a crystal plasticity finite element (CP-FEM) formulation, whose material parameters have been calibrated using experimental flow curves of the considered 25MoCrS4 steel. The deformed grain structure (disloca-tion density, orientation) is transferred to the FDM grid used in the multiphase field approach by a dedicated interpolation scheme. In the phase field calculation, driving forces for static recrystallization are calculated based on the mean energy per grain and the curvature of the grain boundaries. A simplified nucleation model at the grain level is used to initiate the recrystallization process. Under these assumptions, it is possible to approximate the SRX kinetics obtained from the stress relaxation test, but the grain morphology predicted by the 2d model still differs from experimental findings. Key words: static recrystallization, crystal plasticity FEM, multi-phase field method, hot forming, periodic microstructure modeling

1. INTRODUCTION

Microstructural changes play a major role in hot working processes, not only because the microstruc-ture defines force requirements for forming through the flow stress but also since the microstructure de-fines final product properties. Static recrystallization (SRX) is one of the most dominant mechanisms during inter-pass periods of hot rolling or forging processes and it is a common practice to model its kinetics using Johnson-Mehl-Avrami-Kolmogorov

(JMAK) type equations. However, commonly used models such as those proposed by Sellars (1990) lack spatial resolution. They disregard effects of grain topology, misorientations and local accumula-tions of deformation. Operating on the macro-level, they assume that the microstructure associated with a material point can be described by average values of grain size, dislocation density or even strain. Cal-culation strategies based on Monte Carlo Potts (Raabe, 1999), cellular automata (Gawad et al., 2008), vertex (Piekos et al., 2008) and multi-phase


field methods (Takaki & Tomita, 2010) were pro-posed as an attempt to capture the heterogeneities at the micro level. Among those methods, the phase field method offers a promising approach for model-ing static recrystallization after plastic deformation due to its implicit definition of the grain boundaries as a diffusive interface. This simplifies the simula-tion of interface migration, avoiding the complexity of handling their topology one-by-one. In addition, its theoretical foundation on irreversible thermody-namics allows for the implementation of models based on the minimization of the free energy func-tional of the polycrystalline microstructure (Stein-bach, 2009).

In order to take advantage of the phase field ap-proach in an SRX model effectively, a representative initial (i.e. deformed) state of the microstructure is a necessary starting condition. In recent years, crys-tal plasticity finite element (CP-FEM) simulations have gained momentum and have now reached a level of high predictive quality. The possibility to implement grain-scale flow stress evolution models and to derive intra- and inter-grain crystalline inter-actions during deformation enables the generation of a representative deformed microstructure for a phase field simulation (Roters et al., 2010).

However, the results of intricate models with spatial resolution are often not compared to experi-mental results. In this paper, the microstructural evolution of a commercial steel grade (25MoCrS4) during SRX after a hot uniaxial compression test is simulated by coupling CP-FEM calculations and phase field simulations. A 2d microstructure is gen-erated via a Voronoi algorithm, used to set up a CP-FEM model with random grain orientations, and subjected to uniaxial compression. The results of CP-FEM simulation are mapped onto the finite dif-ference grid of the multi-phase field SRX simula-tion. Finally, predicted values are critically com-pared with the results of stress relaxation test.

2. EXPERIMENTAL ANALYSIS

2.1. Material

The steel grade 25MoCrS4, a case hardening steel for gearing applications in the automotive and aerospace industry, was selected as the application material. Its chemical composition is given in Table 1.

2.2. Compression tests

The hardening response of the material was obtained through a set of compression tests at 1100°C and at five different strain rates: ε̇ = 0.01, 0.1, 1, 10 and 100 s-1. Exact procedures of sample preparation and experimental methodology are described elsewhere (Henke et al., 2011). Fig. 1 shows the deformation response of the material under uniaxial compression at 1100°C.

2.3. Stress relaxation tests

The SRX kinetics of the material were examined

by stress relaxation tests. The compression test spec-imens (without lubrication pockets) were deformed to a pre-strain below the critical strain for dynamic recrystallization (DRX) at the different strain rates. After the predefined strain level was reached, the cross-head of the servo-hydraulic testing machine was kept at constant height and the force response of the specimen was measured over time. The decrease of the reaction force and the respective stress values were then converted to recrystallized volume frac-tions (XRX) according to the procedure described by (Gronostajski et al., 1983). Once the time evolution of XRX is known, JMAK kinetics of SRX can be evaluated by determining the unknown parameters of the modified Avrami equation:

X_{RX} = 1 - \exp\left[-\ln 2\left(\frac{t}{t_{0.5}}\right)^{n}\right]   (1)

in which n is the Avrami exponent and t_{0.5} denotes the time required to reach 50% recrystallization. For the given values of X_RX(t) and t_{0.5}, an Avrami exponent of n = 0.56 was determined by regression for the test case strain ε = 0.2 and strain rate ε̇ = 0.01 s-1 at T = 1100°C.
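The regression itself is a one-line linear fit once equation (1) is rearranged; the data points below are synthetic and serve only to show the procedure:

```python
import numpy as np

def fit_avrami(t, x_rx, t05):
    """Fit n in X_RX = 1 - exp(-ln2 (t/t05)^n) by linear regression of
    ln(-ln(1 - X_RX)/ln2) against ln(t/t05)."""
    y = np.log(-np.log(1.0 - x_rx) / np.log(2.0))
    n, _ = np.polyfit(np.log(t / t05), y, 1)
    return n

t = np.array([5.0, 20.0, 60.0, 180.0, 600.0])          # times, s (illustrative)
x = 1.0 - np.exp(-np.log(2.0) * (t / 60.0) ** 0.56)    # synthetic data with n = 0.56
print(fit_avrami(t, x, t05=60.0))                       # recovers ~0.56
```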

Light optical microscopy (LOM) was used be-fore and after SRX in order to determine the grain size evolution and nucleation site preference. For details of the sample preparation we refer to Xiong et al. (2011). LOM results show that the average grain sizes before and after the SRX are 36 µm and 7 µm, respectively, for all considered strain rates.

Table 1. Chemical composition of 25MoCrS4 (1.7326) according to DIN 17210 (Values are in wt. %).

Grade C Mn Si Cr Mo

25MoCrS4 0.23 – 0.29 0.60 – 0.90 0.15 – 0.40 0.40 – 0.50 0.40 – 0.50


In addition, nucleation sites of the SRX are not found inside the grains, but on the grain boundaries.

3. MICROSTRUCTURE MODEL

3.1. Deformation model

In order to simulate the plastic hardening behavior, the well-known phenomenological deformation law by Hutchinson (1976) is used within the simulation software DAMASK (Roters et al., 2012). The law is defined by

\dot{\gamma}^{\alpha} = \dot{\gamma}_0\left|\frac{\tau^{\alpha}}{\tau_c^{\alpha}}\right|^{1/m}\operatorname{sgn}\left(\tau^{\alpha}\right)   (2)

\dot{\tau}_c^{\alpha} = \sum_{\beta=1}^{n} h^{\alpha\beta}\left|\dot{\gamma}^{\beta}\right|   (3)

where α denotes the active slip system, γ̇^α is the shear rate on the active slip system, γ̇_0 is any convenient reference shear rate, τ^α and τ_c^α denote the stress state and the critical resolved shear stress on the active slip system, m characterizes the strain rate sensitivity, and finally h^{αβ} is the function defining the incremental value of τ_c^α in terms of shear increments on a chosen slip system β, which can be calculated using the equation

h^{\alpha\beta} = h_0\left(1 - \frac{\tau_c^{\beta}}{\tau_{sat}}\right)^{a}   (4)

In equation (4), h_0, a and τ_sat are material parameters (Kalidindi et al., 1992). The parameter set can be calibrated with the compression test results. The macro-scale stress-strain curve can be converted to its micro-scale counterpart by the method proposed by Taylor (1938). If the material is assumed to be isotropic throughout the deformation process (i.e. Taylor factor M = 3), the initial and saturation values of the slip resistance can be determined (τ_0 = 8 MPa, τ_sat = 16 MPa) and the other model parameters can be calculated numerically (h_0 = 300 MPa, a = 2). Fig. 1 shows the comparison of experimental and numerical responses of the material. Note that each experiment has been repeated five times in order to take the experimental scatter into account.
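A stripped-down sketch of the slip-system update defined by equations (2)-(4), using the calibrated values quoted above, an assumed reference shear rate and a single collapsed hardening coefficient in place of the full h^{αβ} matrix:

```python
import numpy as np

def slip_update(tau, tau_c, dt, gamma_dot_0=1e-3, m=0.05,
                h0=300.0, a=2.0, tau_sat=16.0):
    """One explicit update of shear rates (eq. 2) and slip resistances (eqs. 3-4)
    for an array of slip systems; stresses in MPa, rates in 1/s."""
    gamma_dot = gamma_dot_0 * np.abs(tau / tau_c) ** (1.0 / m) * np.sign(tau)
    h = h0 * (1.0 - tau_c / tau_sat) ** a            # collapsed hardening coefficient
    tau_c_new = tau_c + dt * h * np.abs(gamma_dot).sum()
    return gamma_dot, np.minimum(tau_c_new, tau_sat)

tau = np.array([9.0, 8.5, 7.0])   # assumed resolved shear stresses on three systems
tau_c = np.full(3, 8.0)           # initial slip resistance, MPa
for _ in range(100):
    gdot, tau_c = slip_update(tau, tau_c, dt=0.01)
print(tau_c)
```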

3.2. Coupling

Three types of data have to be mapped from the CP-FEM output to the phase field simulation: the grain index, the mean grain orientation and the mean stored energy per grain. However, the transfer of data from an FEM mesh to an FDM grid requires a dedicated interpolation scheme. In order to transfer the grain index and the grain orientation data from the nodes to the grid, it is assumed that each grid point has the index and orientation value of its nearest neighboring node, as shown in Fig. 2.

In addition, the local orientations have to be averaged after the mapping to determine the mean grain orientations using circular statistics (Berens, 2009). The stored energy of the deformation can be calculated from the flow stress increase using the equations

\tau_c = \tau_0 + \alpha G b\sqrt{\rho}   (5)

E_d = \frac{1}{2}\,\rho\,G b^2   (6)

Fig. 1. Comparison of CPFEM and compression test results of 25MoCrS4 at 1100°C at various strain rates.


E_d = \frac{\left(\tau_c - \tau_0\right)^2}{2\alpha^2 G}   (7)

In equations (5), (6) and (7), E_d is the stored energy due to deformation, α defines a proportionality constant, G(T) is the temperature dependent shear modulus, and τ_c and τ_0 are the final and initial values of the shear resistance (Taylor, 1934). Then, the nodal energies are mapped onto the grid points.
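Equation (7) turns the slip-resistance increase directly into a stored energy; a small helper with the values used later in section 4.2 (G = 32.2 GPa, α = 0.54) illustrates the conversion, with the stresses passed in being examples only:

```python
def stored_energy(tau_c, tau_0, G=32.2e3, alpha=0.54):
    """Stored deformation energy of eq. (7); stresses and G in MPa,
    result in MPa, which is numerically equal to MJ/m^3."""
    return (tau_c - tau_0) ** 2 / (2.0 * alpha ** 2 * G)

print(stored_energy(tau_c=16.0, tau_0=8.0))   # energy for the saturated state
```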

Fig. 2. The index and orientation of a node (circle) are assigned to all grid points (squares) that lie closer to it than to any other node.

Fig. 3. The energy E_d of each grid point (GP) is determined by interpolating that of the nodes. The influence of each node is inversely proportional to its distance to an individual GP.

In the mapping, the interpolated values at the grid points are obtained as the weighted sum of adjacent finite element mesh nodes. The weights are inversely proportional to the relative distance between the grid point and the finite element node, as illustrated in Fig. 3.
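The two transfer rules (nearest-node assignment for the discrete grain index, inverse-distance weighting for the stored energy) can be sketched as follows; the node coordinates, indices and energies are placeholders:

```python
import numpy as np

def map_fem_to_fdm(node_xy, node_id, node_E, grid_xy, eps=1e-12):
    """Transfer the grain index (nearest node) and the stored energy
    (inverse-distance weighted average of the nodes) to FDM grid points."""
    d = np.linalg.norm(grid_xy[:, None, :] - node_xy[None, :, :], axis=2)
    grid_id = node_id[np.argmin(d, axis=1)]     # nearest-neighbour assignment
    w = 1.0 / (d + eps)                         # inverse-distance weights
    grid_E = (w * node_E[None, :]).sum(axis=1) / w.sum(axis=1)
    return grid_id, grid_E

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ids = np.array([1, 1, 2, 2])
E = np.array([0.1, 0.2, 0.4, 0.3])
gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
grid = np.column_stack([gx.ravel(), gy.ravel()])
print(map_fem_to_fdm(nodes, ids, E, grid)[1].reshape(5, 5))
```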

Finally, the mean stored energy per grain is found by calculating the arithmetic mean over the grain area. Note that at the moment no dislocation density gradients inside the grains are taken into account.

3.3. Recrystallization model

For an isothermal, heterogeneous system, grain

evolution can be modeled by minimization of the free energy functional, under the assumption of a double obstacle function, which leads to the popular formulation of the multi-phase field method, expressed via equation (8) (Eiken et al., 2006).

\dot{\phi}_i = \sum_{j \neq i} m_{ij}\left[\sigma_{ij}\left(\phi_j\nabla^2\phi_i - \phi_i\nabla^2\phi_j + \frac{\pi^2}{2\eta^2}\left(\phi_i - \phi_j\right)\right) + \frac{\pi}{\eta}\sqrt{\phi_i\phi_j}\,\Delta G_{ij}\right]   (8)

In equation (8), η is the interface thickness, m_ij is the grain boundary mobility, σ_ij denotes the interfacial energy between adjacent grain boundaries, the term multiplying σ_ij is related to the grain boundary curvature and ΔG_ij is the contribution of the stored energy (E_d). Therefore, equation (8) models the combined effects of curvature and stored energy on the interface migration. In addition, the nucleation of new grains was assumed to take place at interfaces and triple junctions with site saturation as the initial condition.
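For two grains in one dimension the sum in equation (8) reduces to a single pair interaction, which makes the structure of the update easy to see; the interface width, mobility, interfacial energy, driving force and discretization below are assumed values, not those of the calibrated model:

```python
import numpy as np

nx, dx, dt = 200, 1.0e-5, 5.0e-4           # grid points, spacing (cm), time step (s)
eta, mob, sigma = 1.5e-4, 5.0e-3, 3.0e-7   # interface width, mobility, interfacial energy
dG = 2.5e-2                                # stored-energy driving force of the deformed grain

# phi is the fraction of the recrystallized grain; the deformed grain is 1 - phi
phi = np.clip((np.arange(nx) - nx // 2) / 15.0 + 0.5, 0.0, 1.0)

for _ in range(2000):
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    phi_d = 1.0 - phi                      # deformed grain field
    lap_d = -lap                           # since phi + phi_d = 1 everywhere
    curvature = phi_d * lap - phi * lap_d + (np.pi**2 / (2.0 * eta**2)) * (phi - phi_d)
    driving = (np.pi / eta) * np.sqrt(np.clip(phi * phi_d, 0.0, None)) * dG
    phi = np.clip(phi + dt * mob * (sigma * curvature + driving), 0.0, 1.0)

print(phi[85:106:2])   # the recrystallized grain has advanced into the deformed one
```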

4. SIMULATION AND RESULTS

4.1. Deformation

Isothermal uniaxial compression at a constant strain rate (T = 1100°C and ε̇ = 0.01 s-1) under homogeneous boundary conditions was simulated with a periodic digital microstructure generated by a planar Voronoi tessellation of 25 randomly oriented grains. In order to avoid the occurrence of DRX, a maximum strain of 0.2 was imposed in the stress relaxation experiments, and the same pre-strain was used in the model. The evolution of slip resistance and misorientation is shown in Fig. 4.

4.2. Recrystallization

After the deformation simulation, the local stored energies were converted to mean energies per grain using equation (7) with G = 32.2 GPa and α = 0.54 and transferred to the FDM grid. In the SRX simulation η was taken to be 1.5 µm, the surface



Fig. 4. (a) τ_c and (b) misorientation at the end of deformation.

energy was set to 3e-7 Jcm-2 and the mobility was assumed to be 5e-3 cm4/Js. Nucleation was restricted to occur at the grain boundary interfaces and at triple junctions, and only at sites where the stored energy exceeds 2.5e-2 Jcm-3. When these values are chosen, the SRX kinetics and grain sizes of the phase field simulation are in good correlation with the experimental values, as seen in Fig. 5 and Table 2. However, the Avrami exponent calculated from the phase field model is observed to be larger than 1, in contrast to the exponent from the experiment.

The evolution of the morphology of the deformed microstructure during the SRX was also predicted. It was found that, with the aforementioned parameter set, the recrystallized grain front is directional, which results in a cuboidal final microstructure as shown in Fig. 6.

Table 2. Comparison of mean grain sizes calculated by LOM and CPFEM-PF after recrystallization.

Mean grain diameter / µm   LOM   CPFEM-PF
After recrystallization    7     8.2

Fig. 5. XRX kinetics of CPFEM-PF-simulation, JMAK model and the stress relaxation experiment.

Fig. 6. Progression of recrystallization through time: (a) growth of nuclei on the interfaces (t = 0.1 s), (b) their competitive growth towards non-recrystallized grains (t = 1 s) and (c) the fully recrystallized microstructure (t = 100 s).


5. DISCUSSION

5.1. CP-FEM and coupling

The microstructural deformation of 25MoCrS4 is successfully modeled by CP-FEM using a well-established phenomenological hardening law. Material parameters are calibrated using the experimental results at the macro-scale. Even though the model is phenomenological, the inter-/intra-grain scatter of the grain orientation and hardening is captured and mapped onto the finite difference grid effectively. However, due to the usage of the mean orientation and stored energy per grain in the phase field simulation, the scatter is unintentionally averaged during mapping, which results in a loss of resolution. This problem can be solved in the future by accounting for dislocation gradients and abnormal sub-grain growth in order to improve the simple nucleation model at the grain level.

5.2. Static recrystallization

The SRX kinetics and grain size calculated with the phase field method show a good correlation with those of the stress relaxation experiments. The morphological evolution of the microstructure is found to be directional, resulting in an unrealistic cuboidal final grain geometry. Due to the low experimental value of the Avrami exponent (n < 1), it was necessary in this model to fill all possible nucleation sites. The usage of a 2D model and the assumption of site saturation restrict the number of nucleation sites (i.e. the number of sites per interface) that is available. An extension to 3D space would increase the number of possible nucleation sites per interface and make the predicted grain morphology more realistic.

6. CONCLUSION

In this work a coupled CP-FEM-phase field model based on the mean stored deformation energy and grain boundary curvature is applied to predict SRX kinetics. This mean energy per grain serves as driving force in the recrystallization simulation with the multi-phase field approach. In contrast to con-ventional JMAK based statistical models, the ap-plied model comprises only physical quantities, e.g. surface energy and interfacial mobility which allow for physical interpretation. Moreover, being a spa-tially resolved method, the phase field method takes the heterogeneity of the stored energy and boundary curvature into account.

A dedicated mapping scheme was used to couple the multi-phase field model with a CPFEM defor-mation model whose deformation conditions corre-spond to experimental results. The mapping algo-rithm averages out local gradients and should be improved in the future. The nucleation mechanism of the 2-D model leads to a rather unrealistic grain shape when the model is adjusted to the experimen-tally obtained SRX kinetics. In spite of the good correlation between model and experiment, a 3-D model with an improved nucleation model at the grain level is necessary to predict the SRX kinetics and the final grain shape more accurately.

Acknowledgements. The authors gratefully

acknowledge the financial support of the Deutsche Forschungsgemeinschaft (DFG) for the support of the depicted research within the Cluster of Excel-lence “Integrative Production Technology for High Wage Countries”.

REFERENCES

Berens, P., 2009, A MATLAB Toolbox for Circular Statistics. Journal of Stat. Software, 31, 1-21.

Eiken, J., Böttger, B., Steinbach, I., 2006, Multiphase-field approach for multicomponent alloys with extrapolation scheme for numerical application, Physical Review E, 73, 1-9.

Gawad, J., Madej, W., Kuziak, R., Pietrzyk, M., 2008, Mul-tiscale model of dynamic recrystallization in hot rolling, International Journal of Material Forming, 1, 69-72.

Gronostajski, J., Pulit, E., Ziemba, H., 1983, Recovery and recrystallization of Cu after hot deformation, Metal Sci-ence Journal , 17, 348-352.

Henke, T., Bambach, M., Hirt, G., 2011, Experimental Uncer-tainties affecting the Accuracy of Stress-Strain Equa-tions by the Example of a Hensel-Spittel Approach, 14th International ESAFORM Conference on Material Form-ing: ESAFORM 2011, Belfast, 71-77.

Hutchinson, J.W., 1976, Bounds and Self-Consistent Estimates for Creep of Polycrystalline Materials. Proceedings of the Royal Society of London. A. Mathematical and Phys-ical Sciences, 348, 101‐127.

Kalidindi, S., Bronkhorst, C., Anand, L., 1992, Crystallographic texture evolution in bulk deformation processing of FCC metals, Journal of the Mechanics and Physics of Solids, 40, 537‐569.

Piekos, K., Tarasiuk, J., Wierzbanowski, K., Bacroix, B., 2008. Use of stored energy distribution in stochastic vertex model, Materials Science Forum, 571-572, 231-236.

Raabe, D., 1999, Introduction of a scalable three-dimensional cellular automaton with a probabilistic switching rule for the discrete mesoscale simulation of recrystallization phenomena, Philosophical magazine A-Physics of Con-densed Matter Structure Defects and Mechanical Prop-erties, 79, 2339‐2358.

Roters, F., Eisenlohr, P., Hantcherli, L., Tjahjanto, D., Bieler, T., Raabe, D., 2010, Overview of constitutive laws, kin-ematics, homogenization and multiscale methods in crystal plasticity finite-element modeling: Theory, ex-periments, applications, Acta Materialia, 58, 1152-1211.


Roters, F., Eisenlohr, P., Kords, C., Tjahjanto, D., Diehl, M., Raabe, D., 2012, DAMASK: the Düsseldorf Advanced MAterial Simulation Kit for studying crystal plasticity using an FE based or a spectral numerical solver, Proce-dia IUTAM, 3, 3-10.

Sellars, C.M., 1990, Modelling Microstructural Development during Hot Rolling, Mats. Sci. Tech, 6, 1072-1081.

Steinbach, I., 2009, Phase-field models in materials science, Modelling and Simulation in Materials Science and En-gineering, 17, 1-31.

Takaki, T., Tomita, Y., 2010, Static recrystallization simulations starting from predicted deformation microstructure by coupling multi-phase-field method and finite element method based on crystal plasticity, International Journal of Mechanical Sciences, 52, 320-328.

Taylor, G.I., 1934, The Mechanism of Plastic Deformation of Crystals. Part I. Theoretical. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sci-ences, 145, 362-387.

Taylor, G.I., 1938, Plastic Strain in Metals, J. Inst. Met, 62, 307‐324.

Xiong, W., Wietbrock, B., Saeed-Akbari, A., Bambach, M., 2011, Modeling the Flow Behavior of a High-Manganese Steel Fe-Mn23-C0.6 in Consideration of Dynamic Recrystallization, Steel Research Internation-al, 82, 127-136.

MODELING OF STATIC RECRYSTALLIZATION KINETICS BY COUPLING CRYSTAL PLASTICITY FEM WITH MULTIPHASE FIELD CALCULATIONS

Abstract

In multi-step metal forming processes, static recrystallization (SRX), which occurs during the interpass times, influences the microstructure evolution, the flow stress and the properties of the finished product. Static recrystallization is often modeled using the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation, which is linked to the visco-plastic flow of the material. Such a semi-empirical model is not able to predict the grain microstructure for SRX. This article presents an approach to the simulation of static recrystallization of austenite that couples crystal plasticity with the multiphase field method. The microstructure is modeled using a representative volume element (RVE) of a homogeneous austenite grain structure with periodic boundary conditions. The microstructure is generated using Voronoi polygons. The deformation of the RVE is calculated with the coupled crystal plasticity and FEM methods, taking into account the evolution of grain orientations and dislocation density. The material model parameters were determined from experimental flow curves for 25MoCrS4 steel. The deformed grain structure (dislocation density, orientation) is transferred to the finite difference grid of the multiphase field model using an interpolation method. In the phase field calculations, the driving forces for static recrystallization are computed from the mean energy per grain and the curvature of grain boundaries. A simplified nucleation model at the grain level is used to initiate recrystallization. Under these assumptions it was possible to estimate the SRX kinetics on the basis of stress relaxation tests. On the other hand, the grain morphology predicted by the 2D model still differs from the experimental results.

Received: September 21, 2012 Received in a revised form: October 29, 2012

Accepted: November 3, 2012