
Open Journal of Modelling and Simulation, 2015, 3, 26-31 Published Online January 2015 in SciRes. http://www.scirp.org/journal/ojmsi http://dx.doi.org/10.4236/ojmsi.2015.31003

How to cite this paper: El-Desouky, B.S., Shiha, F.A. and Magar, A.M. (2015) Extension of Generalized Bernoulli Learning Models. Open Journal of Modelling and Simulation, 3, 26-31. http://dx.doi.org/10.4236/ojmsi.2015.31003

Extension of Generalized Bernoulli Learning Models B. S. El-Desouky, F. A. Shiha, A. M. Magar Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, Egypt Email: [email protected], [email protected], [email protected] Received 31 December 2014; accepted 13 January 2015; published 14 January 2015 Academic Editor: Antonio Hervás Jorge, Department of Applied Mathematics, Universidad Politécnica de Valencia, Spain

Copyright © 2015 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

Abstract
In this article, we study the generalized Bernoulli learning model based on the probability of success $p_i = \alpha_i/n$, where $i = 1, 2, \ldots, n$, $0 < \alpha_1 < \alpha_2 < \cdots < \alpha_n \le n$ and $n$ is a positive integer. This gives the previous results of Abdulnasser and Khidr [1], Rashad [2] and EL-Desouky and Mahfouz [3] as special cases, where $p_i = i/n$, $p_i = i^2/n^2$ and $p_i = i^p/n^p$ respectively. The probability function $P(W_n = k)$ of this model is derived, some properties of the model are obtained and the limiting distribution of the model is given.

Keywords Stirling Numbers, Bernoulli Learning Models, Comtet Numbers, Inclusion-Exclusion Principle

1. Introduction
In industry, training programmes are conducted with the aim of training new workers to do a particular job repeatedly every day. It is assumed that a particular trainee will show progress proportional to the number of days he attends the programme; otherwise his ability will be different from one day to another, see [1] [4].

Let $n$ be the length of a programme in days and $l$ the number of repetitions of the job a trainee has to do per day. If a trainee is responding to the instructions, it is reasonable to assume that the probability he will do a single job right, i.e. the probability of success on the $i$th day, is $p_i = i/n$, see Abdulnasser and Khidr [1]. Hence the probability that he will do $x$ jobs correctly out of $l$ jobs on the $i$th day is
$$\binom{l}{x}\left(\frac{i}{n}\right)^{x}\left(1-\frac{i}{n}\right)^{l-x}, \qquad x = 0, 1, \ldots, l, \quad i = 1, 2, \ldots, n.$$
When a trainee is not responding to the instructions, $p_i$ will be a constant $p$, $0 < p < 1$. To test whether a trainee is responding or not, we test whether $p_i$ is varying or sustains a constant value $p$. This can be done by computing the total number of jobs that have been done correctly over the whole period of the programme.
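As a quick illustration (our own sketch, not part of the paper; the function name is hypothetical), the probability of doing $x$ jobs correctly out of $l$ on day $i$ under $p_i = i/n$ is just a binomial pmf and can be computed directly:

```python
from math import comb

def prob_x_correct(x, l, i, n):
    """P(x of l jobs done correctly on day i) when the success probability is p_i = i/n."""
    p = i / n
    return comb(l, x) * p**x * (1 - p) ** (l - x)

# Day i = 3 of an n = 10 day programme, with l = 5 jobs per day:
pmf = [prob_x_correct(x, 5, 3, 10) for x in range(6)]
print(sum(pmf))  # the probabilities over x = 0..l sum to 1
```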

Let $X_{n,i}^{l}$ stand for the number of jobs done correctly out of $l$ jobs on the $i$th day, $i = 1, 2, \ldots, n$, $l = 1, 2, \ldots$, and let $W_n^l = \sum_{i=1}^{n} X_{n,i}^{l}$, $0 \le W_n^l \le nl$. In case $p_i = p$, $0 < p < 1$, the distribution of $W_n^l$ will be $B(nl, p)$.

In this article, we study a generalization of the Bernoulli learning model based on the probability of success $p_i = \alpha_i/n$, where the $\alpha_i$ are real numbers, $i = 1, 2, \ldots, n$, $0 < \alpha_1 < \alpha_2 < \cdots < \alpha_n \le n$ and $n$ is a positive integer. This gives the previous results in [1]-[3] as special cases, where $p_i = i/n$, $p_i = i^2/n^2$ and $p_i = i^p/n^p$ respectively. In Section 2, the probability function $P(W_n = k)$ of this model and some properties of the model are obtained. In Section 3, we derive the limiting distribution of the model. Finally, in Section 4, we discuss some special cases.

2. The Generalized Bernoulli Learning Model
Theorem 1. The distribution of $W_n$ is
$$P(W_n = k) = (-1)^k \sum_{m=k}^{n} \binom{m}{k} \frac{s_\alpha(n+1, n+1-m)}{n^m}, \qquad (1)$$
where $W_n = \sum_{i=1}^{n} X_{n,i}^{1}$ and $X_{n,i}^{1} \approx B(1, \alpha_i/n)$.

Proof. To derive the distribution of the Bernoulli learning model based on the sum of the independent random variables $\{X_{n,i}^{1}\}_{i=1}^{n}$, where the probability of success is $p_i = \alpha_i/n$, we define $E_i$ as the event $X_{n,i}^{1} = 1$, $i = 1, 2, \ldots, n$, see [5], and form the sum
$$S_k = \sum_{1 \le i_1 < \cdots < i_k \le n} P(E_{i_1} E_{i_2} \cdots E_{i_k}) = \sum_{1 \le i_1 < \cdots < i_k \le n} P(X_{n,i_1}^{1} = 1, \ldots, X_{n,i_k}^{1} = 1) = \sum_{1 \le i_1 < \cdots < i_k \le n} \frac{\alpha_{i_1}}{n} \cdots \frac{\alpha_{i_k}}{n} = (-1)^k \frac{s_\alpha(n+1, n+1-k)}{n^k},$$

where $s_\alpha(n, k)$ are the generalized Stirling numbers of the first kind (Comtet numbers), defined by Comtet in [6] [7] as follows
$$(x - \alpha_0)(x - \alpha_1) \cdots (x - \alpha_{n-1}) = \sum_{k=0}^{n} s_\alpha(n, k)\, x^k,$$
where $\alpha = (\alpha_0, \alpha_1, \ldots, \alpha_{n-1})$; for more details, see [8] and [9]. Employing the inclusion-exclusion principle, see [5], we get

$$P(W_n \ge k) = \sum_{m=k}^{n} (-1)^{m-k} \binom{m-1}{k-1} S_m = \sum_{m=k}^{n} (-1)^{m-k} \binom{m-1}{k-1} (-1)^m \frac{s_\alpha(n+1, n+1-m)}{n^m} = (-1)^k \sum_{m=k}^{n} \binom{m-1}{k-1} \frac{s_\alpha(n+1, n+1-m)}{n^m},$$
then


$$P(W_n \ge k+1) = (-1)^{k+1} \sum_{m=k+1}^{n} \binom{m-1}{k} \frac{s_\alpha(n+1, n+1-m)}{n^m},$$

hence
$$P(W_n = k) = P(W_n \ge k) - P(W_n \ge k+1)$$
$$= (-1)^k \sum_{m=k}^{n} \binom{m-1}{k-1} \frac{s_\alpha(n+1, n+1-m)}{n^m} + (-1)^k \sum_{m=k+1}^{n} \binom{m-1}{k} \frac{s_\alpha(n+1, n+1-m)}{n^m}$$
$$= (-1)^k \frac{s_\alpha(n+1, n+1-k)}{n^k} + (-1)^k \sum_{m=k+1}^{n} \left[ \binom{m-1}{k-1} + \binom{m-1}{k} \right] \frac{s_\alpha(n+1, n+1-m)}{n^m}$$
$$= (-1)^k \sum_{m=k}^{n} \binom{m}{k} \frac{s_\alpha(n+1, n+1-m)}{n^m},$$
this yields (1).
Lemma 1.
$$\mu_{W_n} = E(W_n) = \sum_{i=1}^{n} \frac{\alpha_i}{n}, \qquad (2)$$
$$\mathrm{Var}(W_n) = \frac{2}{n^2} \sum_{1 \le i_1 < i_2 \le n} \alpha_{i_1} \alpha_{i_2} + \sum_{i=1}^{n} \frac{\alpha_i}{n} - \left( \sum_{i=1}^{n} \frac{\alpha_i}{n} \right)^2. \qquad (3)$$

Proof. Consider the pair of inverse relations, see [10],
$$a_m = \sum_{k=m}^{n} \binom{k}{m} b_k, \qquad b_m = \sum_{k=m}^{n} (-1)^{m+k} \binom{k}{m} a_k. \qquad (4)$$
Then using (1), let
$$g_m = P(W_n = m) = (-1)^m \sum_{k=m}^{n} \binom{k}{m} \frac{s_\alpha(n+1, n+1-k)}{n^k}.$$
Hence from (4), we get
$$s_\alpha(n+1, n+1-k)\, n^{-k} = (-1)^k \sum_{m=k}^{n} \binom{m}{k} g_m, \qquad (5)$$
and setting $k = 1$, we have
$$s_\alpha(n+1, n)\, n^{-1} = -\sum_{m=1}^{n} m\, g_m = -E(W_n). \qquad (6)$$

But we have, see [7],
$$s_\alpha(n+1, k+1) = (-1)^{n-k} \sum_{1 \le i_1 < i_2 < \cdots < i_{n-k} \le n} \alpha_{i_1} \alpha_{i_2} \cdots \alpha_{i_{n-k}}. \qquad (7)$$
Thus $s_\alpha(n+1, n) = -\sum_{i=1}^{n} \alpha_i$, and this yields (2).
Putting $k = 2$ in (5), we get
$$s_\alpha(n+1, n-1)\, n^{-2} = \sum_{m=2}^{n} \binom{m}{2} g_m = \frac{1}{2} \sum_{m=2}^{n} m^2 g_m - \frac{1}{2} \sum_{m=2}^{n} m\, g_m = \frac{1}{2} E(W_n^2) - \frac{1}{2} E(W_n);$$
using (7), we have $s_\alpha(n+1, n-1) = \sum_{1 \le i_1 < i_2 \le n} \alpha_{i_1} \alpha_{i_2}$, then
$$E(W_n^2) = \frac{2}{n^2} \sum_{1 \le i_1 < i_2 \le n} \alpha_{i_1} \alpha_{i_2} + \sum_{i=1}^{n} \frac{\alpha_i}{n},$$
hence
$$\mathrm{Var}(W_n) = E(W_n^2) - \left( E(W_n) \right)^2 = \frac{2}{n^2} \sum_{1 \le i_1 < i_2 \le n} \alpha_{i_1} \alpha_{i_2} + \sum_{i=1}^{n} \frac{\alpha_i}{n} - \left( \sum_{i=1}^{n} \frac{\alpha_i}{n} \right)^2,$$
this yields (3).
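Theorem 1 and Lemma 1 can be checked numerically. The sketch below is our own (not from the paper; `comtet_s`, `p_formula` and the sample $\alpha_i$ are hypothetical): it builds $s_\alpha(n+1, k)$ by expanding $x(x-\alpha_1)\cdots(x-\alpha_n)$, evaluates formula (1), and compares the result with the exact distribution of $W_n$ obtained by convolving the $n$ Bernoulli pmfs.

```python
from math import comb

def comtet_s(roots):
    """Ascending coefficients of prod_r (x - r); coeffs[k] = s_alpha(len(roots), k)."""
    coeffs = [1.0]
    for r in roots:
        new = [0.0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k + 1] += c   # the x * c term
            new[k] -= r * c   # the -r * c term
        coeffs = new
    return coeffs

n = 6
alphas = [0.5, 1.2, 2.0, 3.3, 4.1, 5.5]   # 0 < a_1 < ... < a_n <= n
s = comtet_s([0.0] + alphas)              # s[k] = s_alpha(n+1, k), with alpha_0 = 0

def p_formula(k):
    """Formula (1): P(W_n = k)."""
    return (-1) ** k * sum(comb(m, k) * s[n + 1 - m] / n**m for m in range(k, n + 1))

# Exact pmf of W_n = sum of independent Bernoulli(alpha_i / n), by convolution.
pmf = [1.0]
for a in alphas:
    p, new = a / n, [0.0] * (len(pmf) + 1)
    for k, q in enumerate(pmf):
        new[k] += q * (1 - p)
        new[k + 1] += q * p
    pmf = new

assert all(abs(p_formula(k) - pmf[k]) < 1e-9 for k in range(n + 1))

# Lemma 1: mean (2) and variance (3) against the exact moments.
mu = sum(a / n for a in alphas)
var = 2 / n**2 * sum(alphas[i] * alphas[j]
                     for i in range(n) for j in range(i + 1, n)) + mu - mu**2
assert abs(mu - sum(k * q for k, q in enumerate(pmf))) < 1e-9
assert abs(var + mu**2 - sum(k * k * q for k, q in enumerate(pmf))) < 1e-9
```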

3. Limiting Distribution of the Bernoulli Learning Model
In this section we study the limiting distribution of the Bernoulli learning model based on the probability of success $\alpha_i/n$.

Theorem 2. Let $W_n = \sum_{i=1}^{n} X_{n,i}^{1}$, where $X_{n,i}^{1} \approx B(1, \alpha_i/n)$ and the $X_{n,i}^{1}$ are independent random variables. Then $\lim_{n \to \infty} M_{Z_n}(t) = \exp(t^2/2)$, where $Z_n = (W_n - \mu_{W_n})/\sigma_{W_n}$, i.e. $Z_n$ is $N(0, 1)$ as $n \to \infty$.

Proof. The moment generating function of $Z_n$ is
$$M_{Z_n}(t) = E\left( \mathrm{e}^{t (W_n - \mu_{W_n})/\sigma_{W_n}} \right) = \mathrm{e}^{-t \mu_{W_n}/\sigma_{W_n}}\, M_{W_n}(t/\sigma_{W_n}),$$
and the moment generating function of $W_n$ is $M_{W_n}(t) = E(\mathrm{e}^{t W_n})$, where $W_n = \sum_{i=1}^{n} X_{n,i}^{1}$; hence
$$M_{W_n}(t/\sigma_{W_n}) = \prod_{i=1}^{n} E\left( \mathrm{e}^{t X_{n,i}^{1}/\sigma_{W_n}} \right) = \prod_{i=1}^{n} \left[ 1 - \frac{\alpha_i}{n} + \frac{\alpha_i}{n}\, \mathrm{e}^{t/\sigma_{W_n}} \right] = \prod_{i=1}^{n} \left[ 1 + \frac{\alpha_i}{n} \left( \mathrm{e}^{t/\sigma_{W_n}} - 1 \right) \right],$$
then
$$M_{Z_n}(t) = \mathrm{e}^{-t \mu_{W_n}/\sigma_{W_n}} \prod_{i=1}^{n} \left[ 1 + \frac{\alpha_i}{n} \left( \mathrm{e}^{t/\sigma_{W_n}} - 1 \right) \right];$$
therefore, we have

$$\ln M_{Z_n}(t) = -\frac{t \mu_{W_n}}{\sigma_{W_n}} + \sum_{i=1}^{n} \ln\left[ 1 + \frac{\alpha_i}{n} \left( \mathrm{e}^{t/\sigma_{W_n}} - 1 \right) \right]$$
$$= -\frac{t \mu_{W_n}}{\sigma_{W_n}} + \sum_{i=1}^{n} \frac{\alpha_i}{n} \sum_{k=1}^{\infty} \frac{1}{k!} \left( \frac{t}{\sigma_{W_n}} \right)^{k} - \frac{1}{2} \sum_{i=1}^{n} \frac{\alpha_i^2}{n^2} \left( \sum_{k=1}^{\infty} \frac{1}{k!} \left( \frac{t}{\sigma_{W_n}} \right)^{k} \right)^{2} + \cdots$$
$$= -\frac{t \mu_{W_n}}{\sigma_{W_n}} + \frac{t}{\sigma_{W_n}} \sum_{i=1}^{n} \frac{\alpha_i}{n} + \frac{t^2}{2 \sigma_{W_n}^2} \left( \sum_{i=1}^{n} \frac{\alpha_i}{n} - \sum_{i=1}^{n} \frac{\alpha_i^2}{n^2} \right) + O\!\left( \frac{1}{n} \right);$$


by using (2) and (3), we obtain
$$\ln M_{Z_n}(t) = \frac{t^2}{2} + O\!\left( \frac{1}{n} \right),$$
hence
$$\lim_{n \to \infty} M_{Z_n}(t) = \exp\left( t^2/2 \right), \qquad (8)$$
which is the moment generating function of the standard normal distribution $N(0, 1)$.
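A numerical illustration of Theorem 2 (our own sketch, not from the paper; the function names and the choice $\alpha_i = i^2/n$, i.e. $p_i = i^2/n^2$, are ours): the exact distribution of $W_n$ is computed by convolution, and its standardized skewness, which is $0$ for $N(0,1)$, shrinks as $n$ grows.

```python
def pmf_wn(n, alphas):
    """Exact pmf of W_n = sum of independent Bernoulli(alpha_i / n)."""
    pmf = [1.0]
    for a in alphas:
        p, new = a / n, [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)
            new[k + 1] += q * p
        pmf = new
    return pmf

def skewness(pmf):
    """Standardized third central moment of an integer-valued pmf."""
    m = sum(k * q for k, q in enumerate(pmf))
    v = sum((k - m) ** 2 * q for k, q in enumerate(pmf))
    s3 = sum((k - m) ** 3 * q for k, q in enumerate(pmf))
    return s3 / v**1.5

sk10 = skewness(pmf_wn(10, [i * i / 10 for i in range(1, 11)]))
sk80 = skewness(pmf_wn(80, [i * i / 80 for i in range(1, 81)]))
assert abs(sk80) < abs(sk10)  # closer to the normal value 0 as n grows
```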

4. Some Special Cases
In this section we discuss some special cases as follows.
i) Setting the probability of success $p_i = i/n$, we have the results derived in [1] as a special case.
Theorem 3. The distribution of $W_n^1$ is given by [1]
$$P(W_n^1 = k) = (-1)^k \sum_{m=k}^{n} \binom{m}{k} \frac{s(n+1, n+1-m)}{n^m}, \qquad k = 1, 2, \ldots, n, \qquad (9)$$
where $s(n, k)$ are the usual Stirling numbers of the first kind, see [10]. They also obtained the limiting distribution of the learning model, and the mean and variance, as follows.
Theorem 4. Let $W_n^1 = \sum_{i=1}^{n} X_i$, where $X_i \approx B(1, i/n)$ and the $X_i$ are independent random variables. Then $\lim_{n \to \infty} M_{Z_n}(t) = \mathrm{e}^{t^2/2}$, where $Z_n = (W_n - \mu_{W_n})/\sigma_{W_n}$, i.e. $Z_n$ has $N(0, 1)$ as $n \to \infty$.
Lemma 2.
$$\mu_{W_n^1} = \frac{n+1}{2}, \qquad \sigma_{W_n^1}^2 = \frac{n^2 - 1}{6n}. \qquad (10)$$
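Lemma 2 can be verified exactly with rational arithmetic (our own sketch, not from [1]; `mean_var` is a hypothetical name):

```python
from fractions import Fraction

def mean_var(n):
    """Exact mean and variance of W_n^1 for p_i = i/n."""
    ps = [Fraction(i, n) for i in range(1, n + 1)]
    return sum(ps), sum(p * (1 - p) for p in ps)

for n in (5, 12, 30):
    mu, var = mean_var(n)
    assert mu == Fraction(n + 1, 2)            # (10), mean
    assert var == Fraction(n * n - 1, 6 * n)   # (10), variance
```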

ii) Setting the probability of success $p_i = i^2/n^2$, we have the results derived in [2] as a special case.
Theorem 5. The distribution of $W_n^1$ is given by [2]
$$P(W_n^1 = k) = (-1)^{n+k} \sum_{m=k}^{n} \binom{m}{k} \frac{1}{n^{2m}} \sum_{l=0}^{2(n-m)} (-1)^{l}\, s(n+1, l+1)\, s(n+1, 2(n-m)+1-l). \qquad (11)$$
Lemma 3.
$$\mu_{W_n^1} = \frac{(n+1)(2n+1)}{6n} \quad \text{and} \quad \sigma_{W_n^1}^2 = \frac{(n^2-1)(4n^2-1)}{30n^3}.$$
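Lemma 3 likewise checks out with exact rational arithmetic (our own sketch, not from [2]):

```python
from fractions import Fraction

for n in (5, 12, 30):
    # p_i = i^2 / n^2 as exact rationals
    ps = [Fraction(i * i, n * n) for i in range(1, n + 1)]
    mu = sum(ps)
    var = sum(p * (1 - p) for p in ps)
    assert mu == Fraction((n + 1) * (2 * n + 1), 6 * n)
    assert var == Fraction((n * n - 1) * (4 * n * n - 1), 30 * n**3)
```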

iii) Setting the probability of success $p_i = i^p/n^p$, we have the results derived in [3] as a special case.
Theorem 6.
$$P(W_n = k) = (-1)^k \sum_{m=k}^{n} \binom{m}{k} \frac{s_p(n+1, n+1-m)}{n^{pm}}, \qquad (12)$$
where $W_n = \sum_{i=1}^{n} X_{n,i}^{1}$, $X_{n,i}^{1} \approx B(1, i^p/n^p)$ and $s_p(n, k)$ are the $p$-Stirling numbers, see [11] [12].
Theorem 7. Let $W_n = \sum_{i=1}^{n} X_{n,i}^{1}$, where $X_{n,i}^{1} \approx B(1, i^p/n^p)$ and the $X_{n,i}^{1}$ are independent random variables. Then $\lim_{n \to \infty} M_{Z_n}(t) = \mathrm{e}^{t^2/2}$, where $Z_n = (W_n - \mu_{W_n})/\sigma_{W_n}$, i.e. $Z_n$ has $N(0, 1)$ as $n \to \infty$.
Lemma 4.
$$\mu_{W_n} = E(W_n) = \sum_{i=1}^{n} \frac{i^p}{n^p}, \quad \text{and}$$
$$\mathrm{Var}(W_n) = \frac{2}{n^{2p}} \sum_{1 \le i_1 < i_2 \le n} i_1^p\, i_2^p + \sum_{i=1}^{n} \frac{i^p}{n^p} - \left( \sum_{i=1}^{n} \frac{i^p}{n^p} \right)^2.$$

5. Conclusion
Our main goal in this work is to study the extension of the generalized Bernoulli learning model with probability of success $p_i = \alpha_i/n$, $i = 1, 2, \ldots, n$, $0 < \alpha_1 < \alpha_2 < \cdots < \alpha_n \le n$, where $n$ is a positive integer. Some previous results, see [1]-[3], are obtained as special cases of our result, namely for $p_i = i/n$, $p_i = i^2/n^2$ and $p_i = i^p/n^p$ respectively. The mean and variance of the model are obtained. Finally, the limiting distribution of the general model is derived. This model has many applications in industry, especially for training programmes.

References
[1] Abdulnasser, T. and Khidr, A.M. (1981) On the Distribution of a Sum of Non-Identical Independent Binomials. Journal of University of Kuwait, 8, 109-115.
[2] Rashad, A.M. (1998) On the Sum of Non-Identical Random Variables. M.Sc. Thesis, Faculty of Science, Aswan.
[3] EL-Desouky, B.S. and Mahfouz, K.M. (2012) A Generalization of Bernoulli Learning Models. Kuwait Journal of Science and Engineering, 39, 31-44.
[4] Janardan, K.G. (1997) Bernoulli Learning Models: Uppuluri Numbers. In: Balakrishnan, N., Ed., Advances in Combinatorial Methods and Applications to Probability and Statistics, Statistics for Industry and Technology, Birkhäuser, Boston, 471-480. http://dx.doi.org/10.1007/978-1-4612-4140-9_29
[5] Parzen, E. (1960) Modern Probability Theory and Its Applications. John Wiley & Sons, Hoboken.
[6] Comtet, M.L. (1972) Nombres de Stirling généraux et fonctions symétriques. C. R. Acad. Sci. Paris, Sér. A, 275, 747-750.
[7] Comtet, M.L. (1974) Advanced Combinatorics. Reidel, Dordrecht. http://dx.doi.org/10.1007/978-94-010-2196-8
[8] EL-Desouky, B.S. and Cakić, N.P. (2011) Generalized Higher Order Stirling Numbers. Mathematical and Computer Modelling, 54, 2848-2857. http://dx.doi.org/10.1016/j.mcm.2011.07.005
[9] EL-Desouky, B.S., Cakić, N.P. and Mansour, T. (2010) Modified Approach to Generalized Stirling Numbers via Differential Operators. Applied Mathematics Letters, 23, 115-120. http://dx.doi.org/10.1016/j.aml.2009.08.018
[10] Riordan, J. (1968) Combinatorial Identities. John Wiley & Sons, New York.
[11] Beardon, A.F. (1996) Sums of Powers of Integers. The American Mathematical Monthly, 103, 201-213.
[12] Sun, Y. (2006) Two Classes of p-Stirling Numbers. Discrete Mathematics, 306, 2801-2805. http://dx.doi.org/10.1016/j.disc.2006.05.016