From the SelectedWorks of Dr. Sandip Kumar Lahiri

December 2008

Prediction of Pressure Drop of Slurry Flow in Pipeline by Hybrid Support Vector Regression and Genetic Algorithm Model


Available at: http://works.bepress.com/sandip_lahiri/4


Chinese Journal of Chemical Engineering, 16(6) (2008)

Prediction of Pressure Drop of Slurry Flow in Pipeline by Hybrid Support Vector Regression and Genetic Algorithm Model

S.K. Lahiri* and K.C. Ghanta, Department of Chemical Engineering, NIT Durgapur, West Bengal, India

Abstract  This paper describes a robust support vector regression (SVR) methodology that can offer superior performance for important process engineering problems. The method incorporates a hybrid support vector regression and genetic algorithm technique (SVR-GA) for efficient tuning of the SVR meta-parameters. The algorithm has been applied to the prediction of the pressure drop of solid-liquid slurry flow. A comparison with selected correlations in the literature showed that the developed SVR correlation noticeably improved the prediction of pressure drop over a wide range of operating conditions, physical properties, and pipe diameters.

Keywords  support vector regression, genetic algorithm, slurry pressure drop

1 INTRODUCTION

Conventionally, two approaches, namely phenomenological (first principles) and empirical, are employed for slurry flow modeling. The advantages of a phenomenological model are: (i) it represents the physico-chemical phenomena underlying the process explicitly, and (ii) it possesses extrapolation ability. Owing to the complex nature of many multiphase slurry processes, the underlying physico-chemical phenomena are seldom fully understood. Also, collection of the requisite phenomenological information is costly, time-consuming and tedious, and therefore the development and solution of phenomenological process models pose considerable practical difficulties. These difficulties necessitate the exploration of alternative modeling formalisms.

In the last decade, artificial neural networks (ANNs) and, more recently, support vector regression (SVR) have emerged as two attractive tools for nonlinear modeling, especially in situations where the development of phenomenological or conventional regression models becomes impractical or cumbersome. In recent years, support vector regression [1-3], a machine learning formalism based on statistical learning theory, has been gaining popularity over ANNs due to its many attractive features and promising empirical performance.

The main difference between conventional ANNs and support vector machines (SVM) lies in the risk minimization principle. Conventional ANNs implement the empirical risk minimization (ERM) principle to minimize the error on the training data, while SVM adheres to the structural risk minimization (SRM) principle, seeking to set up an upper bound on the generalization error [1].

This study is motivated by the growing popularity of support vector machines (SVM) for regression problems. Although the foundation of the SVR paradigm was laid down in the mid-1990s, its chemical engineering applications, such as fault detection [4, 5], have emerged only recently.

However, many SVM regression application studies are performed by 'expert' users having a good understanding of the SVM methodology. Since the quality of SVM models depends on a proper setting of the SVM meta-parameters, the main issue for practitioners trying to apply SVM regression is how to set these parameter values (to ensure good generalization performance) for a given data set. Whereas existing sources on SVM regression give some recommendations on appropriate settings of the SVM parameters, there is clearly no consensus, and plenty of contradictory opinions exist.

Conventionally, various deterministic gradient-based methods [6] are used for optimizing a process model. Most of these methods, however, require that the objective function be smooth, continuous, and differentiable. SVR models cannot be guaranteed to be smooth, especially in regions wherein the input-output data (training set) used in model building are located sparsely. In such situations, an efficient optimization formalism known as the genetic algorithm (GA), which is lenient towards the form of the objective function, can be used. The GA was originally developed to mimic population evolution in natural systems and has been extensively used in chemical engineering [7-11].

In the present paper, the SVR formalism is integrated with a genetic algorithm to automatically determine the optimal meta-parameters of SVR with the highest predictive accuracy and generalization ability simultaneously. The strategy (henceforth referred to as "SVR-GA") uses SVR as the nonlinear process modeling paradigm and the GA for optimizing the meta-parameters of the SVR model such that an improved prediction performance is realized. To our knowledge, the hybrid involving SVR and GA is being used for the first time for chemical process modeling and optimization. In the present work, we propose a hybrid support vector regression-genetic algorithm (SVR-GA) approach for tuning the SVR meta-parameters and illustrate it by applying it to the prediction of the pressure drop of solid-liquid slurry flow.

Received 2008-04-18, accepted 2008-08-16.

* To whom correspondence should be addressed. E-mail: [email protected]


2 SUPPORT VECTOR REGRESSION (SVR) MODELING

2.1 Support vector regression: at a glance

Consider a training data set $g = \{(x_1, y_1), (x_2, y_2), \ldots, (x_P, y_P)\}$, such that $x_i \in \mathbb{R}^N$ is a vector of input variables and $y_i \in \mathbb{R}$ is the corresponding scalar output (target) value. Here, the modeling objective is to find a regression function, $y = f(x)$, such that it accurately predicts the outputs {y} corresponding to a new set of input-output examples, {(x, y)}, which are drawn from the same underlying joint probability distribution as the training set. To fulfill the stated goal, SVR considers the following linear estimation function:

$$f(x) = \langle w, x \rangle + b \qquad (1)$$

where w denotes the weight vector; b refers to a constant known as the "bias"; $\phi(x)$ denotes the feature map; and $\langle w, x \rangle$ represents the dot product in the feature space $\Phi$, such that $\phi : x \mapsto \Phi$, $w \in \Phi$. The basic concept of support vector regression is to map the original data x nonlinearly into a higher-dimensional feature space and solve a linear regression problem in this feature space [1-3].

The regression problem is equivalent to minimizing the following regularized risk function:

$$R(f) = \frac{1}{n}\sum_{i=1}^{n} L_{\varepsilon}\bigl(f(x_i) - y_i\bigr) + \frac{1}{2}\|w\|^2 \qquad (2)$$

where

$$L_{\varepsilon}\bigl(f(x) - y\bigr) = \begin{cases} \left|f(x) - y\right| - \varepsilon, & \text{for } \left|f(x) - y\right| \geq \varepsilon \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$

Equation (3) is also called the ε-insensitive loss function. This function defines an ε-tube. If the predicted value is within the tube, the loss is zero; if the predicted value is outside the tube, the loss equals the magnitude of the difference between the predicted value and the radius ε of the tube. ε is a precision parameter representing the radius of the tube located around the regression function (see Fig. 1); the region enclosed by the tube is known as the "ε-insensitive zone".
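To make Eq. (3) concrete, here is a minimal NumPy sketch of the ε-insensitive loss; the function name and the default ε are illustrative choices, not from the paper:

```python
import numpy as np

def eps_insensitive_loss(y_pred, y_true, eps=0.1):
    """Eq. (3): zero inside the eps-tube, linear in the excess deviation outside."""
    residual = np.abs(y_pred - y_true)
    return np.where(residual >= eps, residual - eps, 0.0)

print(eps_insensitive_loss(np.array([1.0, 1.05, 1.5]), np.array([1.0, 1.0, 1.0])))
# -> [0.  0.  0.4]: only the point lying outside the tube is penalized
```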

Figure 1  A schematic diagram of support vector regression using the ε-insensitive loss function [4] (○ data points; points outside the tube: support vectors; ● fitted by SVR)

The SVR algorithm attempts to position the tube around the data, as shown in Fig. 1. Substituting the ε-insensitive loss function into Eq. (2), the optimization objective becomes:

minimize

$$\frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\left(\xi_i + \xi_i^*\right) \qquad (4)$$

subject to

$$\begin{cases} y_i - \langle w, x_i \rangle - b \leq \varepsilon + \xi_i \\ \langle w, x_i \rangle + b - y_i \leq \varepsilon + \xi_i^* \\ \xi_i, \; \xi_i^* \geq 0 \end{cases} \qquad (5)$$

where the constant C > 0 determines the penalty imposed on samples whose error exceeds ε. The two positive slack variables $\xi_i, \xi_i^*$ represent the distance from the actual values to the corresponding boundary values of the ε-tube. The SVR fits f(x) to the data in such a manner that: (i) the training error is minimized by minimizing $\xi_i, \xi_i^*$, and (ii) $\|w\|^2$ is minimized to increase the flatness of f(x), i.e., to penalize over-complexity of the fitting function. A dual problem can then be derived by using the optimization method to maximize the function:

maximize

$$-\frac{1}{2}\sum_{i,j=1}^{n}\left(\alpha_i - \alpha_i^*\right)\left(\alpha_j - \alpha_j^*\right)\langle x_i, x_j \rangle - \varepsilon\sum_{i=1}^{n}\left(\alpha_i + \alpha_i^*\right) + \sum_{i=1}^{n} y_i\left(\alpha_i - \alpha_i^*\right) \qquad (6)$$

subject to $\sum_{i=1}^{n}\left(\alpha_i - \alpha_i^*\right) = 0$ and $0 \leq \alpha_i, \alpha_i^* \leq C \qquad (7)$

where $\alpha_i, \alpha_i^*$ are Lagrange multipliers. Owing to the specific character of the above-described quadratic programming problem, only some of the coefficients $\left(\alpha_i - \alpha_i^*\right)$ are non-zero, and the corresponding input vectors, $x_i$, are called support vectors (SVs). The SVs can be thought of as the most informative data points that compress the information content of the training set. The coefficients α and α* have an intuitive interpretation as forces pushing and pulling the regression estimate $f(x_i)$ towards the measurements, $y_i$.

The SVM for function fitting obtained by using the above-mentioned maximization function is then given by

$$f(x) = \sum_{i=1}^{n}\left(\alpha_i - \alpha_i^*\right)\langle x_i, x \rangle + b \qquad (8)$$

As for nonlinear cases, the solution can be found by mapping the original problem to a linear one in a high-dimensional characteristic space, in which the dot product manipulation can be substituted by a kernel function, i.e. $K(x_i, x_j) = \varphi(x_i)\cdot\varphi(x_j)$. In this work, different kernel functions are used in the SVR. Substituting $K(x_i, x_j)$ for the dot product in Eq. (6) allows the SVM algorithm to be reformulated in a nonlinear paradigm. Finally, we have

$$f(x) = \sum_{i=1}^{n}\left(\alpha_i - \alpha_i^*\right) K(x_i, x) + b \qquad (9)$$
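As an illustration of Eqs. (6)-(9), the sketch below fits a kernel SVR on toy data using scikit-learn (not the authors' implementation); the data set and parameter values are arbitrary stand-ins:

```python
import numpy as np
from sklearn.svm import SVR

# Toy 1-D data standing in for the slurry data set.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(200)

# Kernel SVR solves the dual of Eqs. (6)-(7); predictions follow Eq. (9).
model = SVR(kernel="rbf", C=100.0, epsilon=0.05, gamma=0.5)
model.fit(X, y)

# Only the support vectors carry non-zero (alpha_i - alpha_i*); cf. Eq. (8).
print("support vectors:", model.support_vectors_.shape[0], "of", len(X))
print("dual coefficients shape:", model.dual_coef_.shape)  # the (alpha - alpha*) values
```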

2.2 Performance estimation of SVR

It is well known that SVM generalization performance (estimation accuracy) depends on a good setting of the meta-parameters C and ε and of the kernel settings, i.e. the kernel type, the loss function type and the kernel parameter. The problem of optimal parameter selection is further complicated by the fact that the SVM model complexity (and hence its generalization performance) depends on all five parameters.

Selecting a particular kernel type and kernel function parameters is usually based on application-domain knowledge and should also reflect the distribution of the input (x) values of the training data. Parameter C determines the trade-off between the model complexity (flatness) and the degree to which deviations larger than ε are tolerated in the optimization formulation. For example, if C is too large (infinity), then the objective is to minimize the empirical risk only, without regard to the model complexity part of the optimization formulation.

Parameter ε controls the width of the ε-insensitive zone used to fit the training data [1, 3, 12]. The value of ε can affect the number of support vectors used to construct the regression function: the bigger ε, the fewer support vectors are selected. On the other hand, bigger ε-values result in more 'flat' estimates. Hence, both the C and ε values affect model complexity (but in different ways).
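This effect of ε on the number of support vectors can be seen directly in a small experiment; the sketch below uses scikit-learn's SVR on synthetic data, with all values purely illustrative:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(300, 1))
y = np.exp(-X).ravel() + 0.1 * rng.standard_normal(300)

# A larger epsilon gives a wider tube, hence fewer support vectors and a flatter fit.
for eps in (0.01, 0.05, 0.2, 0.5):
    n_sv = SVR(kernel="rbf", C=10.0, epsilon=eps).fit(X, y).support_.size
    print(f"epsilon = {eps:>4}: {n_sv} support vectors")
```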

To minimize the generalization error, these parameters should be properly optimized. As is evident from the literature, there is no shortage of (conflicting) opinions on the optimal setting of SVM regression parameters, and many such approaches to the choice of C and ε are summarized in Cherkassky et al. [13]. All the practical approaches rely on noise estimation in the training data, which may be possible for electrical signals but is very difficult for chemical engineering data or process parameters. Existing software implementations of SVM regression usually treat the SVM meta-parameters as user-defined inputs. For a non-expert user it is a very difficult task to choose these parameters, as he has no prior knowledge of suitable values for his data. In such a situation, the user normally relies on a trial-and-error method. Such an approach, apart from consuming enormous time, may not really obtain the best possible performance. In this paper we present a hybrid SVR-GA approach, which not only relieves the user from choosing these meta-parameters but also finds the optimum values of these parameters so as to minimize the generalization error.

3 GENETIC ALGORITHM: AT A GLANCE

Genetic algorithms [14] combine the "survival of the fittest" principle of natural evolution with the genetic propagation of characteristics to arrive at a robust search and optimization technique. The principal features possessed by GAs are: (i) they are zeroth-order optimization methods requiring only the scalar values of the objective function; (ii) they can handle nonlinear, complex and noisy objective functions; (iii) they perform a global search and thus are more likely to arrive at or near the global optimum; and (iv) their search procedure being stochastic, GAs do not impose pre-conditions, such as smoothness, differentiability and continuity, on the form of the objective function.

Owing to these attractive features, GAs are being used for solving diverse optimization problems in chemical engineering (see, e.g., [7, 8, 10, 11, 14]).

In the GA procedure, the search for an optimal solution (decision) vector, x*, begins from a randomly initialized population of probable (candidate) solutions. The solutions, usually coded in the form of binary strings (chromosomes), are then tested to measure their fitness in fulfilling the optimization objective. Subsequently, a main loop comprising the following operations is performed: (i) selection of better (fitter) parent chromosomes; (ii) production of an offspring solution population by crossing over the genetic material between pairs of the fitter parent chromosomes; and (iii) mutation (bit-flipping) of the offspring strings. Implementation of this loop generates a new population of candidate solutions which, compared to the previous population, usually fares better at fulfilling the optimization objective. The best string that evolves after repeating the above loop until convergence forms the solution to the optimization problem.
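A minimal sketch of this loop in Python is given below; the bit length, population size, and crossover/mutation rates are illustrative defaults, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def ga_minimize(objective, n_bits=16, pop_size=40, n_gen=60,
                p_cross=0.8, p_mut=0.02, lo=0.0, hi=10.0):
    """Binary-string GA: selection -> crossover -> mutation, repeated n_gen times."""
    def decode(bits):  # map a bit string to a real value in [lo, hi]
        return lo + (hi - lo) * bits.dot(2 ** np.arange(n_bits)) / (2 ** n_bits - 1)

    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(n_gen):
        fit = np.array([objective(decode(c)) for c in pop])
        # Tournament selection of fitter parents (lower objective is better).
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[i] < fit[j], i, j)]
        # One-point crossover between parent pairs.
        children = parents.copy()
        for k in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = rng.integers(1, n_bits)
                children[k, cut:], children[k + 1, cut:] = \
                    parents[k + 1, cut:].copy(), parents[k, cut:].copy()
        # Bit-flip mutation of the offspring strings.
        children ^= (rng.random(children.shape) < p_mut).astype(children.dtype)
        pop = children
    best = min(pop, key=lambda c: objective(decode(c)))
    return decode(best)

# Example: minimize a 1-D function with a known minimum at x = 3.
print(ga_minimize(lambda x: (x - 3.0) ** 2))
```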

4 GA-BASED OPTIMIZATION OF SVR MODELS

There are different measures by which SVM performance is assessed, but validation and leave-one-out error estimates are the most commonly used ones. Here we divide the total available data into training data (75% of the data) and test data (25% of the data, chosen randomly), on which the SVR performance is assessed.

The statistical analysis of SVR prediction is based on the following performance criteria:

(1) The average absolute relative error (A) on the test data should be minimized:

$$A = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{y_{\mathrm{pred},i} - y_{\mathrm{exp},i}}{y_{\mathrm{exp},i}}\right|$$

(2) The standard deviation of the error (σ) on the test data should be minimal:

$$\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(\left|\frac{y_{\mathrm{pred},i} - y_{\mathrm{exp},i}}{y_{\mathrm{exp},i}}\right| - A\right)^{2}}$$

(3) The cross-correlation coefficient (R) between the experimental and predicted outputs should be close to unity:

$$R = \frac{\sum_{i=1}^{N}\left(y_{\mathrm{exp},i} - \overline{y}_{\mathrm{exp}}\right)\left(y_{\mathrm{pred},i} - \overline{y}_{\mathrm{pred}}\right)}{\sqrt{\sum_{i=1}^{N}\left(y_{\mathrm{exp},i} - \overline{y}_{\mathrm{exp}}\right)^{2}\sum_{i=1}^{N}\left(y_{\mathrm{pred},i} - \overline{y}_{\mathrm{pred}}\right)^{2}}}$$
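These three criteria translate directly into code. The following is a minimal NumPy sketch under two stated assumptions: the σ formula is read as the sample standard deviation of the absolute relative error about its mean A, and R is computed in the standard Pearson form; the sample numbers are arbitrary:

```python
import numpy as np

def aare(y_pred, y_exp):
    """Average absolute relative error A."""
    return np.mean(np.abs(y_pred - y_exp) / y_exp)

def error_std(y_pred, y_exp):
    """Sample standard deviation sigma of the absolute relative error about A."""
    rel = np.abs(y_pred - y_exp) / y_exp
    return np.sqrt(np.sum((rel - rel.mean()) ** 2) / (len(y_exp) - 1))

def cross_corr(y_pred, y_exp):
    """Cross-correlation coefficient R (Pearson form)."""
    return np.corrcoef(y_pred, y_exp)[0, 1]

y_exp = np.array([560.0, 600.0, 1250.0, 1500.0])
y_pred = np.array([590.0, 620.0, 1180.0, 1460.0])
print(aare(y_pred, y_exp), error_std(y_pred, y_exp), cross_corr(y_pred, y_exp))
```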

SVR learning is considered successful only if the system can perform well on test data on which it has not been trained. The above five parameters of the SVR are optimized by the GA as stated below.

The objective function and the optimization problem of the SVR model in the present study are represented as

$$\min_{X}\; A \text{ on the test set}, \qquad X = \{x_1, x_2, x_3, x_4, x_5\}$$

subject to $x_1 \in (0, 10^4]$, $x_2 \in [0, 1]$, $x_3 \in \{1, 2\}$, $x_4 \in \{1, 2, \ldots, 6\}$, $x_5 \in \{1, 2, \ldots, 6\}$.

The objective is minimization of the average absolute relative error (AARE) A on the test set, where X is a solution string representing a design configuration. The design variable x1 takes any value of C in the range 0.0 to 10000; x2 represents ε (epsilon), taking any value in the range 0.0 to 1; x3 represents the loss function type, the ε-insensitive and quadratic loss functions being represented by the numbers 1 and 2, respectively (Table 2); x4 represents the kernel type, the six kernels given in Table 1 being represented by the numbers 1 to 6; and x5 takes six values of the kernel parameter (p1, representing the degree of the polynomial, etc.) in the range 1 to 6. The total number of design combinations with these variables is 100×100×2×6×6 = 720000 (assuming 100 trials each to arrive at the optimum C and ε). This means that an exhaustive search would take at most 720000 function evaluations before arriving at the global minimum of A on the test set, so a strategy that needs far fewer function evaluations is preferable. For minimization of A as the objective function, the genetic algorithm technique is applied to find the optimum design configuration of the SVR model, as sketched below.

Table 2  Different loss functions

Case 1: ε-insensitive loss function
Case 2: quadratic loss function

The stepwise procedure for the GA-based optimization of an SVR model is provided in the flowchart in Fig. 2.

5 CASE STUDY: PREDICTION OF PRESSURE DROP IN SOLID LIQUID SLURRY FLOW

5.1 Background and importance of pressure drop

Compared to mechanical transport of slurries, the use of a pipeline ensures a dust-free environment, demands substantially less space, makes full automation possible and requires a minimum of operating staff. Power consumption represents a substantial portion of the overall pipeline transport operational costs. For that reason, great attention has been paid to the reduction of hydraulic losses. The prediction of the pressure drop of slurries and an understanding of their rheological behavior make it possible to optimize energy and water requirements.

Various investigations [15-19] have proposed correlations relating pressure drop to other parameters of solid-liquid flow, namely solids density, liquid density, particle size, concentration, pipe diameter, viscosity of the flowing medium, velocity of the suspension, etc. They found that the pressure drop curve passes through a minimum (point 3 in Fig. 3) and that operation at this velocity ensures the lowest power consumption.

Despite the large area of application, the available models describing the suspension mechanism do not completely satisfy engineering needs. To facilitate the design and scale-up of pipelines and slurry pumps, there is a need for a correlation that can predict slurry pressure drop over a wide range of operating conditions, physical properties and particle size distributions. Industry needs quick and easily implemented solutions. A model derived from first principles is no doubt the best solution; but in a scenario where basic principles accounting for all the interactions in slurry flow are absent, a numerical model may be promising for giving quick, easy solutions for slurry pressure drop prediction.

This paper presents a systematic approach using robust hybrid SVR-GA techniques to build a pressure drop correlation from available experimental data. This correlation has been derived from a broad experimental data bank collected from the open literature (refer to Table 3; 800 measurements covering a wide range of pipe dimensions, operating conditions and physical properties).

Table 1  Different kernel types

Case 1, linear: $k = u v'$
Case 2, polynomial: $k = (u v' + 1)^{p_1}$; $p_1$ is the degree of the polynomial
Case 3, Gaussian radial basis function: $k = \exp\!\left[-\dfrac{(u-v)(u-v)'}{2p_1^2}\right]$; $p_1$ is the width of the radial basis function (σ)
Case 4, exponential radial basis function: $k = \exp\!\left[-\dfrac{\sqrt{(u-v)(u-v)'}}{2p_1^2}\right]$; $p_1$ is the width of the exponential radial basis function (σ)
Case 5, spline: $z = 1 + uv + \tfrac{1}{2}uv\min(u,v) - \tfrac{1}{6}\min(u,v)^3$ (elementwise); $k = \operatorname{prod}(z)$
Case 6, B-spline: $z = \sum_{r=0}^{2(p_1+1)} (-1)^r \binom{2(p_1+1)}{r} \max\bigl(0,\, u - v + (p_1+1) - r\bigr)^{2p_1+1}$ (elementwise); $k = \operatorname{prod}(z)$; $p_1$ is the degree of the B-spline

Note: u, v are kernel arguments.
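The Table 1 kernels can be written compactly as Python functions. This is a hedged sketch of the formulas as reconstructed above (in particular, the B-spline form follows the garbled source and omits any normalization constant), with default $p_1$ values chosen arbitrarily:

```python
import numpy as np
from math import comb

def k_linear(u, v):
    return float(u @ v)

def k_poly(u, v, p1=2):
    return float(u @ v + 1.0) ** p1

def k_rbf(u, v, p1=1.0):
    return float(np.exp(-np.sum((u - v) ** 2) / (2 * p1 ** 2)))

def k_erbf(u, v, p1=1.0):
    return float(np.exp(-np.sqrt(np.sum((u - v) ** 2)) / (2 * p1 ** 2)))

def k_spline(u, v):
    m = np.minimum(u, v)
    z = 1.0 + u * v + 0.5 * u * v * m - m ** 3 / 6.0
    return float(np.prod(z))

def k_bspline(u, v, p1=1):
    d = u - v
    n = 2 * (p1 + 1)
    z = sum((-1) ** r * comb(n, r) * np.maximum(0.0, d + (p1 + 1) - r) ** (2 * p1 + 1)
            for r in range(n + 1))
    return float(np.prod(z))

u, v = np.array([0.2, -0.5]), np.array([0.1, 0.3])
print(k_linear(u, v), k_rbf(u, v), k_erbf(u, v))
```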



5.2 Development of the SVR-based correlation

5.2.1 Collection of data As mentioned earlier, over the years researchers

have amply quantified the pressure drop of slurry flow in pipeline. In this work, about 800 experimental points have been collected from 20 sources spanning the years 1942-2002 [refer to Table 3]. This wide range of database includes experimental information from different physical systems to provide a unified correlation for pressure drop. Table 4 suggests the wide range of the collected databank for pressure drop.

5.2.2 Identification of input parameters

After an extensive literature survey, all physical parameters that influence pressure drop were put into a so-called 'wish list'.

Out of the number of inputs in the 'wish list', we used support vector regression to establish the best set of inputs describing pressure drop. The following criteria guide the choice of the input set:

(1) The inputs should be as few as possible.
(2) Each input should be highly cross-correlated to the output parameter.
(3) The inputs should be weakly cross-correlated to each other.
(4) The selected input set should give the best output prediction, which is checked by statistical analysis (e.g. average absolute relative error A, standard deviation).

While choosing the most expressive inputs, there is a compromise between the number of inputs and the prediction quality. Based on different combinations of inputs, a trial-and-error method was used to finalize the input set that gives a reasonably low prediction error A when exposed to support vector regression.

Based on the above analysis, the input variables, namely pipe diameter, particle diameter, solids concentration, solids density, liquid density and liquid viscosity, have been finalized for predicting the pressure drop in a slurry pipeline. Table 4 shows the typical range of data selected for support vector regression.

6 RESULTS AND DISCUSSION

As the magnitudes of the inputs and outputs differ greatly from each other, they are normalized to the −1 to +1 scale by the formula

Figure 2 Schematic for SVR algorithm implementation

Figure 3  Plot of transitional mixture velocity with pressure drop

Table 3  Literature sources for pressure drop correlations

1  Wilson (1942) [15]
2  Durand & Condolios (1952) [16]
3  Newitt et al. (1955) [17]
4  Zandi & Govatos (1967) [18]
5  Shook et al. (1968) [19]
6  Schriek et al. (1973) [20]
7  Scarlett & Grimley (1974) [21]
8  Turian & Yuan (1977) [22]
9  Wasp et al. (1977) [23]
10 Govier & Aziz (1982) [24]
11 Gillies et al. (1983) [25]
12 Roco & Shook (1984) [26]
13 Roco & Shook (1985) [27]
14 Ma (1987) [28]
15 Hsu (1987) [29]
16 Doron et al. (1987) [30]
17 Ghanta (1996) [31]
18 Gillies et al. (1999) [32]
19 Schaan et al. (2000) [33]
20 Kaushal & Tomita (2002) [34]


$$x_{\mathrm{normal}} = \frac{2x - \left(x_{\max} + x_{\min}\right)}{x_{\max} - x_{\min}}$$
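A one-line implementation of this scaling, with a small worked example (the sample numbers are borrowed from Table 5 merely for illustration):

```python
import numpy as np

def normalize(x, x_min, x_max):
    """The normalization formula above: maps x linearly onto [-1, +1]."""
    return (2.0 * x - x_max - x_min) / (x_max - x_min)

# Example: scale each column of a small data matrix by its own range.
X = np.array([[2.54, 40.1], [5.26, 38.3], [20.85, 190.0]])
print(normalize(X, X.min(axis=0), X.max(axis=0)))  # every column now lies in [-1, 1]
```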

75% of the total dataset was chosen randomly for training, and the remaining 25% was reserved for validation and testing.

Seven parameters were identified as inputs for the SVR (Table 5), with the pressure drop as the target. These data were then exposed to the hybrid SVR-GA model described above. After optimization of the five SVR parameters described above, the model output is summarized in Table 6. The parity plot of experimental versus predicted pressure drop is shown in Fig. 4. The low A (12.7%) may be considered an excellent prediction performance, given the poor understanding of slurry flow phenomena and the large training databank comprising various systems. Fig. 5 presents a comparative study of the present model and the Kaushal model [34] against the experimental data of Ghanta [31]. The figure clearly shows the improvement in model prediction compared to the Kaushal model, especially in the low velocity range. The optimum values of the SVR meta-parameters are summarized in Table 7, clearly showing that there exist almost 4 different feasible solutions which lead to the same prediction error.

Table 6  Prediction error of the SVR-based model

          A      σ      R
training  0.125  0.163  0.978
testing   0.127  0.169  0.977

The final equation is as follows:

Pressure drop = K(i, j) · B + b

Table 4  Slurry systems① and parameter ranges from the literature data

Pipe diameter D/m: 0.019-0.495
Particle diameter dp/μm: 38.3-13000
Liquid density ρL/kg·m⁻³: 1000-1250
Solids density ρS/kg·m⁻³: 1370-2844
Liquid viscosity μ/mPa·s: 0.12-4
Velocity U/m·s⁻¹: 0.86-4.81
Solids concentration (volume fraction) f: 0.014-0.333

① Slurry systems: coal/water, copper ore/water, sand/water, gypsum/water, glass/water, gravel/water.

Table 5  Typical input and output data for SVR training

D/cm   dp/μm   ρL/kg·L⁻¹   ρS/kg·L⁻¹   μ/mPa·s   U/m·s⁻¹   f   (dp/dz)/Pa·m⁻¹

2.54 40.1 1.00 2.84 0.85 1.06 0.07 560.00

2.54 40.1 1.00 2.84 0.85 1.13 0.07 600.00

2.54 40.1 1.00 2.84 0.85 1.15 0.07 600.00

1.90 40.1 1.00 2.84 0.85 1.10 0.07 1250.00

1.90 40.1 1.00 2.84 0.85 1.50 0.07 1500.00

1.90 40.1 1.00 2.84 0.85 1.70 0.07 1700.00

5.26 38.3 1.00 2.33 1.00 1.11 0.11 294.10

5.26 38.3 1.00 2.33 1.00 3.01 0.11 1651.30

5.26 38.3 1.00 2.33 1.00 4.81 0.11 3822.90

5.26 38.3 1.00 2.33 1.00 1.33 0.31 542.90

5.26 38.3 1.00 2.33 1.00 3.12 0.31 2352.60

5.26 38.3 1.00 2.33 1.00 4.70 0.31 4727.70

20.85 190.0 1.00 1.37 1.14 2.59 0.33 266.50

20.85 190.0 1.00 1.37 1.14 2.34 0.33 226.30

20.85 190.0 1.00 1.37 1.14 2.01 0.33 177.30

20.85 190.0 1.00 1.37 1.14 1.78 0.33 147.00

20.85 190.0 1.00 1.37 1.14 1.59 0.32 123.40

20.85 190.0 1.00 1.37 1.14 1.37 0.33 99.90

5.15 165.0 1.00 2.65 1.00 1.66 0.07 666.20

5.15 165.0 1.00 2.65 1.00 3.78 0.09 2449.20

5.15 165.0 1.00 2.65 1.00 1.66 0.17 901.30

5.15 165.0 1.00 2.65 1.00 4.17 0.19 3428.90

5.15 165.0 1.00 2.65 1.00 1.66 0.27 1136.40

5.15 165.0 1.00 2.65 1.00 4.33 0.29 4408.10

26.30 165.0 1.00 2.65 1.00 2.90 0.09 261.60

26.30 165.0 1.00 2.65 1.00 3.50 0.09 334.10

26.30 165.0 1.00 2.65 1.00 2.90 0.18 305.70

26.30 165.0 1.00 2.65 1.00 3.50 0.17 382.10

26.30 165.0 1.00 2.65 1.00 2.90 0.26 355.60

26.30 165.0 1.00 2.65 1.00 3.50 0.26 453.60

26.30 165.0 1.00 2.65 1.00 2.90 0.33 414.40

26.30 165.0 1.00 2.65 1.00 3.50 0.32 526.10

49.50 165.0 1.00 2.65 1.00 3.16 0.09 143.00

Table 5 (Continued)

D/cm   dp/μm   ρL/kg·L⁻¹   ρS/kg·L⁻¹   μ/mPa·s   U/m·s⁻¹   f   (dp/dz)/Pa·m⁻¹

49.50 165.0 1.00 2.65 1.00 3.76 0.09 186.10

49.50 165.0 1.00 2.65 1.00 3.07 0.17 157.70

49.50 165.0 1.00 2.65 1.00 3.76 0.26 254.70

15.85 190.0 1.00 2.65 1.30 2.50 0.14 475.20

15.85 190.0 1.00 2.65 1.30 2.50 0.29 630.90

15.85 190.0 1.00 2.65 0.12 3.00 0.13 648.90

15.85 190.0 1.00 2.65 0.12 2.90 0.28 866.70

5.07 520.0 1.00 2.65 1.00 1.90 0.09 1175.60

5.07 520.0 1.00 2.65 1.00 2.00 0.21 1763.40

4.00 580.0 1.25 2.27 4.00 2.88 0.16 3926.00

4.00 580.0 1.25 2.27 4.00 2.70 0.14 3580.00

4.00 580.0 1.25 2.27 4.00 2.01 0.10 2217.00

4.00 580.0 1.25 2.27 4.00 1.05 0.06 845.00

26.30 13000.0 1.00 2.65 1.00 3.20 0.04 842.50

26.30 13000.0 1.00 2.65 1.00 4.00 0.04 989.50


where K(i, j) is the kernel function

$$K(i, j) = \exp\!\left[-\frac{\sqrt{\left(T_{\mathrm{st}}(i,1{:}7) - T_{\mathrm{rn}}(j,1{:}7)\right)\left(T_{\mathrm{st}}(i,1{:}7) - T_{\mathrm{rn}}(j,1{:}7)\right)'}}{2p_1^2}\right], \quad i = 1, \ldots, n; \; j = 1, \ldots, m$$

where n is the number of test records; m is the number of training records (600 in this case); Tst is an n×7 matrix of testing inputs (for which the user wants to calculate the pressure drop), arranged in the same column sequence as Table 5; Trn is the 600×7 matrix of training inputs, arranged in the same sequence; B is a 600×1 matrix calculated during training; b is zero for this case; and p1 is the width of the erbf kernel (given as the kernel parameter in Table 7).
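The prediction step K·B + b can be sketched as follows; here B and the inputs are random placeholders with the shapes stated above, since the trained coefficient matrix is not reproduced in the paper:

```python
import numpy as np

def erbf_kernel_matrix(T_st, T_rn, p1):
    """K(i, j) = exp(-||T_st[i] - T_rn[j]|| / (2 * p1**2)), the erbf kernel above."""
    d = np.sqrt(((T_st[:, None, :] - T_rn[None, :, :]) ** 2).sum(axis=2))
    return np.exp(-d / (2.0 * p1 ** 2))

rng = np.random.default_rng(4)
T_rn = rng.uniform(-1, 1, (600, 7))   # normalized training inputs (600 x 7)
T_st = rng.uniform(-1, 1, (5, 7))     # normalized test inputs (n x 7)
B = rng.standard_normal((600, 1))     # placeholder for the trained coefficient matrix
b = 0.0                               # bias, zero in this case study

pressure_drop = erbf_kernel_matrix(T_st, T_rn, p1=2.0) @ B + b   # (5 x 1) predictions
print(pressure_drop.ravel())
```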

In a separate study, we exposed the same dataset to the SVR algorithm alone (without the GA) and tried to optimize the various parameters by exhaustive search. We found that it was not possible to reach the best solutions starting from arbitrary initial conditions. In particular, the optimum choice of C and ε is very difficult to arrive at when starting from some discrete values; many times the solutions got stuck in suboptimal local minima. These experiments justified the use of a hybrid technique for SVR parameter tuning. The best prediction after the exhaustive search, along with the SVR parameters, is summarized in Table 8. From the table it is clear that even after 720000 runs the SVR algorithm is unable to locate the global minimum, and the execution time is 4 h on a Pentium 4 processor. On the other hand, the hybrid SVR-GA technique is able to locate the global minimum within 2000 runs in under 1 h, and the prediction accuracy is also much better. Moreover, it relieves non-expert users of choosing the different parameters and finds the optimum SVR meta-parameters with good accuracy.

All 800 experimental data points collected from the open literature were also exposed to the different formulas and correlations for pressure drop available in the open literature, and the AARE was calculated for each of them (Table 9). From Table 9 it is evident that the prediction error of pressure drop has been reduced considerably in the present work.

Figure 4  Experimental vs. predicted pressure drop for the exponential radial basis function (erbf) kernel

Figure 5  Comparison of the SVR-GA model, the Kaushal model [34] and experimental pressure drop data [31] (—— experimental; SVR-GA model; Kaushal [34])

Table 7  Optimum parameters obtained by the hybrid SVR-GA algorithm

No   C        ε     Kernel type   Type of loss function   Kernel parameter   A

1 5043.82 0.50 erbf ε-insensitive 2 0.127

2 6698.16 0.36 erbf ε-insensitive 3 0.127

3 7954.35 0.48 erbf ε-insensitive 4 0.127

4 8380.98 0.54 erbf ε-insensitive 5 0.127

5 9005.8 0.59 erbf ε-insensitive 6 0.127

Table 8  Comparison of performance of the SVR-GA hybrid model vs. the SVR model

                                                   A      σ      R      Execution time/h
hybrid SVR-GA model                                0.127  0.169  0.977  1
SVR model without GA tuning (exhaustive search)    0.132  0.170  0.975  4


Table 9  Performance of different correlations in predicting pressure drop

No   Author   A/%

1 Wilson (1942) [15] 49.51

2 Durand & Condolios (1952) [16] 36.53

3 Newitt et al. (1955) [17] 93.43

4 Zandi & Govatos (1967) [18] 50.02

5 Shook et al. (1968) [19] 34.50

6 Turian & Yuan (1977) [22] 39.97

7 Wasp et al. (1977) [23] 26.68

8 Gillies et al. (1999) [32] 22.31

9 Kaushal & Tomita (2002) [34] 22.01

10 present work 12.70

7 CONCLUSIONS

A support vector machine regression methodology with a robust parameter tuning procedure has been described in this work. The method employs a hybrid SVR-GA approach for minimizing the generalization error. Superior prediction performance was obtained for the pressure drop case study, and a comparison with selected correlations in the literature showed that the developed SVR correlation noticeably improved the prediction of pressure drop over a wide range of operating conditions, physical properties, and pipe diameters. The proposed hybrid technique (SVR-GA) also relieves non-expert users of choosing the meta-parameters of the SVR algorithm for their case studies and finds the optimum values of these meta-parameters on its own. The results indicate that the SVR-based technique with the GA-based parameter tuning approach described in this work can yield excellent generalization and can be advantageously employed for a large class of regression problems encountered in process engineering.

NOMENCLATURE

A       average absolute relative error
α, α*   vectors of Lagrange multipliers
ε       precision parameter
λ       regularization constant
ξ, ξ*   slack variables
σ       width of the radial basis function kernel

REFERENCES

1 Vapnik, V., Golowich, S., Smola, A., "Support vector method for function approximation, regression estimation and signal processing", Adv. Neural Inform. Proces. Syst., 9, 281-287 (1996).
2 Vapnik, V., The Nature of Statistical Learning Theory, Springer Verlag, New York (1995).
3 Vapnik, V., Statistical Learning Theory, John Wiley, New York (1998).
4 Jack, L.B., Nandi, A.K., "Fault detection using support vector machines and artificial neural networks augmented by genetic algorithms", Mech. Sys. Sig. Proc., 16, 372-390 (2002).
5 Agarwal, M., Jade, A.M., Jayaraman, V.K., Kulkarni, B.D., "Support vector machines: A useful tool for process engineering applications", Chem. Eng. Prog., 23, 57-62 (2003).
6 Edgar, T.F., Himmelblau, D.M., Optimization of Chemical Processes, McGraw-Hill, Singapore (1989).
7 Cartwright, H.M., Long, R.A., "Simultaneous optimization of chemical flowshop sequencing and topology using genetic algorithms", Ind. Eng. Chem. Res., 32, 2706-2711 (1993).
8 Garrard, A., Fraga, E.S., "Mass exchange network synthesis using genetic algorithms", Comput. Chem. Eng., 22, 1837-1839 (1998).
9 Goldberg, D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, New York (1989).
10 Hanagandi, V., Ploehn, H., Nikolaou, M., "Solution of the self-consistent field model for polymer adsorption by genetic algorithms", Chem. Eng. Sci., 51, 1071-1077 (1996).
11 Polifke, W., Geng, W., Dobbeling, K., "Optimization of rate coefficients for simplified reaction mechanisms with genetic algorithms", Combust. Flame, 113, 119-121 (1998).
12 Cherkassky, V., Mulier, F., Learning from Data: Concepts, Theory and Methods, John Wiley & Sons (1998).
13 Cherkassky, V., Shao, X., Mulier, F., Vapnik, V., "Model complexity control for regression using VC generalization bounds", IEEE Transactions on Neural Networks, 10 (5), 1075-1089 (1999).
14 Babu, B.V., Sastry, K.K.N., "Estimation of heat transfer parameters in a trickle-bed reactor using differential evolution and orthogonal collocation", Comput. Chem. Eng., 23, 327-339 (1999).
15 Wilson, W.E., "Mechanics of flow with non-colloidal inert solids", Trans. ASCE, 107, 1576-1594 (1942).
16 Durand, R., Condolios, E., "Experimental investigation of the transport of solids in pipes", Journal of Hydrauliques Computation, 10, 29-55 (1952).
17 Newitt, D.M., Richardson, J.F., Abbott, M., Turtle, R.B., "Hydraulic conveying of solids in horizontal pipes", Trans. Instn Chem. Engrs, 33, 93-113 (1955).
18 Zandi, I., Govatos, G., "Heterogeneous flow of solids in pipelines", Proc. ASCE J. Hydraul. Div., 93 (HY3), 145-159 (1967).
19 Shook, C.A., Daniel, S.M., Scott, J.A., Holgate, J.P., "Flow of suspensions in pipelines, two mechanisms of particle suspension", Can. J. Chem. Eng., 46, 238-244 (1968).
20 Schriek, W., Smith, L.G., Haas, D.B., Husband, W.H.W., "Experimental studies on the hydraulic transport of coal", Report E73-17, Saskatchewan Research Council, Saskatoon, Canada (1973).
21 Scarlett, B., Grimley, A., "Particle velocity and concentration profiles during hydraulic transport in a circular pipe", Proc. Hydrotransport 3, BHRA Fluid Engineering, Cranfield, UK, paper D3, 23 (1974).
22 Turian, R.M., Yuan, T.F., "Flow of slurries in pipelines", AIChE J., 23, 232-243 (1977).
23 Wasp, E.J., Kenny, J.P., Gandhi, R.L., Solid Liquid Flow - Slurry Pipeline Transportation, Trans. Tech. Publ., Rockport, Mass., USA (1977).
24 Govier, G.W., Aziz, K., The Flow of Complex Mixtures in Pipes, Krieger Publication, Malabar, FL (1982).
25 Gillies, R., Husband, W.H.W., Small, M., Shook, C.A., "Concentration distribution effects in a fine particle slurry", Proc. 8th Technical Conference, Slurry Transport Association, 131-142 (1983).
26 Roco, M.C., Shook, C.A., "Computational methods for coal slurry pipelines with heterogeneous size distribution", Powder Technol., 39, 159-176 (1984).
27 Roco, M.C., Shook, C.A., "Turbulent flow of incompressible mixtures", J. Fluids Eng., 107, 224-231 (1985).
28 Ma, T.W., "Stability, rheology and flow in pipes, bends, fittings, valves and venturimeters of concentrated non-Newtonian suspensions", PhD Thesis, University of Illinois, Chicago (1987).
29 Hsu, F.L., "Flow of non-colloidal slurries in pipelines: Experiments and numerical simulations", PhD Thesis, University of Illinois, Chicago (1987).
30 Doron, P., Granica, D., Barnea, D., "Slurry flow in horizontal pipes - experimental and modeling", Int. J. Multiphase Flow, 13, 535-547 (1987).
31 Ghanta, K.C., "Studies on rheological and transport characteristics of solid-liquid suspension in pipeline", PhD Thesis, IIT Kharagpur, India (1996).
32 Gillies, R.G., Hill, K.B., Mckibben, M.J., Shook, C.A., "Solids transport by laminar Newtonian flows", Powder Technol., 104, 269-277 (1999).
33 Schaan, J., Robert, J.S., Gillies, R.G., Shook, C.A., "The effect of particle shape on pipeline friction for Newtonian slurries of fine particles", Can. J. Chem. Eng., 78, 717-725 (2000).
34 Kaushal, D.R., Tomita, Y., "Solids concentration profiles and critical velocity in pipeline flow of multisized particulate slurries", Int. J. Multiphase Flow, 28, 1697-1717 (2002).