BAYESIAN INFERENCE IN EXPONENTIAL AND PARETO POPULATIONS IN THE PRESENCE OF OUTLIERS

THESIS SUBMITTED TO THE

COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

UNDER THE FACULTY OF SCIENCE

By

JEEVANAND E. S.

DEPARTMENT OF MATHEMATICS AND STATISTICS
COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
COCHIN - 682 022

DECEMBER 1993

Certified that the thesis entitled "Bayesian Inference in Exponential and Pareto Populations in the Presence of Outliers" is a bonafide record of work done by Sri E. S. Jeevanand under my guidance in the Department of Mathematics and Statistics, Cochin University of Science and Technology, and that no part of it has been included anywhere previously for the award of any degree or title.

Cochin University of Science and Technology          Dr. N. Unnikrishnan Nair
December 20, '93                                     Professor of Statistics

This thesis contains no material which has been accepted for the award of any other Degree or Diploma in any University and, to the best of my knowledge and belief, it contains no material previously published by any other person, except where due references are made in the text of the thesis.

Cochin                                               (JEEVANAND E. S.)
December 20, '93

ACKNOWLEDGEMENT

I am deeply grateful for the guidance and assistance given to me, without which I would not have been able to finish this work. I am thankful to the Head of the Department, the teachers and the research scholars with whom I have had useful discussions directly or indirectly relating to this work. The help and co-operation extended to me by the office staff of the department and the personnel of the Cochin University computer centre, and by my friends, are gratefully acknowledged.

My parents have been my great pillars of strength, whose love and constant support have led me through this work to its end, and I also thank my brother for all the help he has given for the success of my research work.

Finally, I wish to place on record my gratitude to the University Grants Commission for awarding me a Junior Fellowship.

(JEEVANAND E. S.)

CONTENTS

Chapter I      PRELIMINARIES
               1.1  Introduction
               1.2  Data generating models
               1.3  Bayes inference
               1.4  The present work

Chapter II     ESTIMATION OF PARETO PARAMETERS
               2.1  Introduction
               2.2  The model
               2.3  Estimation with known scale parameter
               2.4  Estimation with unknown scale parameter
               2.5  Discussion

Chapter III    ESTIMATION WHEN b IS UNKNOWN
               3.1  Introduction
               3.2  Estimation when 0 < b < 1
               3.3  Estimation when b > 1

Chapter IV     ESTIMATION OF THE SURVIVAL FUNCTION
               4.1  Introduction
               4.2  The Exchangeable model
               4.3  The Identifiable model
               4.4  The Censored model
               4.5  Discussion

Chapter V      PREDICTION BOUNDS FOR THE PARETO ORDER STATISTICS
               5.1  Introduction
               5.2  The model
               5.3  Prediction interval with known b
               5.4  Prediction interval with unknown b
               5.5  Discussion

Chapter VI     ESTIMATION OF EXPONENTIAL PARAMETERS IN THE PRESENCE OF k-OUTLIERS
               6.1  Introduction
               6.2  The Model
               6.3  Estimation when 0 < b < 1
               6.5  Determination of the number of outliers
               6.6  Discussion

Chapter VII    PREDICTION INTERVAL FOR ORDER STATISTICS IN EXPONENTIAL SAMPLE
               7.1  Introduction
               7.2  The model
               7.3  Prediction interval with known b
               7.4  Prediction interval with unknown b
               7.5  Discussion

Chapter VIII   ESTIMATION OF P[X>Y] FROM EXPONENTIAL SAMPLES
               8.1  Introduction
               8.2  The model
               8.3  The Exchangeable model
               8.4  The Identifiable model
               8.5  The Censored model
               8.6  Discussion

REFERENCES

APPENDIX       COMPUTER PROGRAMS

Chapter I

PRELIMINARIES

1.1 Introduction

The origin of the concept of outliers in statistical data can be traced to the concern manifested by analysts over seemingly unrepresentative observations in a collection, and the problems such observations created in the understanding of the real-world phenomenon the data was supposed to provide. An outlying observation, or outlier, is one that appears to deviate markedly from the other members of the sample in which it occurs (Grubbs (1969)). Thus the reliability of the observation is reflected by its relationship with the other members of the sample and, as such, a decision on whether an observation is an outlier or not is essentially subjective. The literature on outliers is voluminous on its own and moreover shares many results with other areas of statistics, like robust procedures, mixture models, slippage problems and data analysis. A detailed review covering various aspects of statistical analysis in the presence of outliers is available in Anscombe (1960), Grubbs (1969), Stigler (1973, 1980), Barnett (1978), Kale (1979), Hawkins (1980), Barnett and Lewis (1984) and Gather and Kale (1992). In view of this, in the present study only the basic concepts, with the general framework required to develop the results in the subsequent chapters, will be outlined.

There are three basic reasons for the emergence of outliers identified in the literature: global model weakness, which often requires a change in the initially assumed model to a new one for the entire sample; local model weakness, which applies only to the seemingly outlying observations, paving the way for the individual treatment of such observations; and natural variability, in which case outliers will naturally originate as a characteristic of the inherent model.

Two broad methods of dealing with the possibility of outliers are identification and accommodation. Identification procedures essentially lead to the rejection of outlying observations if they are present, or to their incorporation into the analysis through a revision of the basic model or method of estimation, or to the realisation that the rogue observations were the result of a defective mechanism that calls for renewed experimentation. On the other hand, accommodation procedures, as the name suggests, advocate and practice preservation of the possible outliers via appropriate revision of the models or the methods of analysis or both. The methods of accommodation largely depend on the information at the disposal of the analyst about the process generating the outliers, or they are so designed as to be unaffected by outlying observations. In any case, the available a priori information, the philosophy towards approaching the problem and the specific goals one has set are important elements in shaping the appropriate procedure.

1.2 Data generating models

The null or working model adopted by the analyst in any practical problem is that X1, X2, ..., Xn are independent and identically distributed (iid) observations from some target population specified by the distribution function F(x, θ) ∈ 𝓕 = {F(x, θ) | θ ∈ Ω}, whose functional form is known except for the parameters. To facilitate a theoretical framework for the treatment of the outliers, it is necessary to evolve an alternative model.

The earliest of the alternative models proposed in literature are the mixture models due to Newcomb (1886). If x1, x2, ..., xn are realisations of X1, X2, ..., Xn, the joint probability density function (pdf) of the Xi's in a mixture model has the form

    L(x | f, g, p) = ∏_{i=1}^{n} {(1-p) f(xi) + p g(xi)},    (1.1)

where x = (x1, x2, ..., xn), f and g are density functions and 0 < p < 1. It is easy to notice that (1.1) represents the pdf of iid random variables, each of which has distribution function

    H(x) = (1-p) F(x) + p G(x),

where F ∈ 𝓕 and G ∈ 𝓖 are distribution functions. However Tukey (1960), while disagreeing with the adequacy of (1.1) vis-a-vis its capability to explain the occurrence of outliers, proposed contaminating models of the form

    h(xi) = (1-α) f(xi) + α g(xi),

in which f is the density of the target population and g is the density of the contaminating factor.
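As an illustration of how data from such a contaminated model arise, each observation can be drawn from the contaminant density g with probability α and from the target f otherwise. A minimal Python sketch (Python and the exponential choices for f and g are assumptions for illustration only; the thesis itself works with Pareto and exponential populations later):

```python
import random

def contaminated_sample(n, f_draw, g_draw, alpha):
    """Draw n iid values from h(x) = (1 - alpha) f(x) + alpha g(x):
    each observation independently comes from the contaminant g
    with probability alpha, otherwise from the target f."""
    return [g_draw() if random.random() < alpha else f_draw() for _ in range(n)]

random.seed(1)
# assumed example: target Exp(mean 1), contaminant Exp(mean 10)
sample = contaminated_sample(200, lambda: random.expovariate(1.0),
                             lambda: random.expovariate(0.1), alpha=0.05)
```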

In contemporary literature much attention has been devoted to what are known as k-outlier models. To describe them, consider independent random variables X1, X2, ..., Xn, (n-k) of which are distributed as F ∈ 𝓕 while the remaining k are distributed as G = G(F), depending on F and belonging to the class 𝓖. Let f and g denote the pdfs corresponding to F and G respectively, and s the subset of indices that label the observations belonging to G. Thus, in the k-outlier model,

    S = {s | s = (i1, i2, ..., ik), a permutation of k integers out of (1, 2, ..., n)}

contains all subsets formed by choosing k integers out of n. The likelihood of the sample is then

    L(x | f, g) = ∏_{i∉s} f(xi) ∏_{i∈s} g(xi),  s ∈ S.    (1.2)

If T(x1, x2, ..., xn) is a symmetric statistic, then its distribution does not depend on s, and further there does not exist any non-trivial sufficient statistic in this case. It is shown in Kale (1976) that the k largest order statistics X(n-k+1), ..., X(n) are most likely to be the observations distributed as G.

Another significant contribution to outlier generating models is that of the exchangeable class introduced by Kale (1969). The idea here is that any set of k random variables out of (X1, X2, ..., Xn) has an equal probability of being distributed according to G, while the other random variables have distribution specified by F. In this case the likelihood takes the form

    L(x | f, g) = (n choose k)^{-1} Σ_{s∈S} ∏_{i∉s} f(xi) ∏_{i∈s} g(xi).    (1.3)

Since the random variables are exchangeable, unlike in (1.2), in (1.3) the order statistics are sufficient, which renders inference based on them desirable. In a variant approach, Barnett and Lewis (1984) considered the notion of the labelled model, which specifies the model in terms of the distribution of the order statistics, assuming that the largest (smallest) k observations arise from G and the rest belong to F. Thus (X(1), ..., X(n)) is distributed as (Y(1), ..., Y(n-k), Z(1), ..., Z(k)), where the Y's follow F and the Z's follow G, such that max_{1≤i≤n-k} Yi ≤ min_{1≤j≤k} Zj. Hence the likelihood is

    L(x | f, g) = [(n-k)! k! / w_k(F, G)] ∏_{i=1}^{n-k} f(x(i)) ∏_{j=n-k+1}^{n} g(x(j)),

where w_k(F, G) is the probability that max Yi ≤ min Zj. Yet another possibility is to treat the elements of the subset s as known, giving rise to a pooled sample of n comprising (n-k) observations from F and k from G. The likelihood then takes the simple form

    L(x | f, g) = ∏_{i∉s} f(xi) ∏_{j∈s} g(xj),    (1.4)

for a given s.
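The likelihoods (1.2)-(1.4) differ only in how the index set s is handled, and for small n they can be evaluated directly by enumerating the k-subsets. A Python sketch (the exponential densities f, g and the sample x are invented for illustration and are not from the thesis):

```python
import math
from itertools import combinations

def identifiable_lik(x, s, f, g):
    """Likelihood (1.4): the indices in the known subset s come from g,
    the remaining indices from f."""
    return math.prod(g(x[i]) if i in s else f(x[i]) for i in range(len(x)))

def exchangeable_lik(x, k, f, g):
    """Likelihood (1.3): the average of (1.4) over all k-subsets s,
    since every subset is equally probable a priori."""
    subsets = list(combinations(range(len(x)), k))
    return sum(identifiable_lik(x, set(s), f, g) for s in subsets) / len(subsets)

f = lambda t: math.exp(-t)            # target density: exponential, mean 1
g = lambda t: 0.5 * math.exp(-t / 2)  # contaminant: exponential, mean 2
x = [0.3, 1.1, 0.7, 4.2]
L = exchangeable_lik(x, 1, f, g)
```

With k = 0 the subset sum has a single (empty) term and the model collapses to the homogeneous iid likelihood, which gives a quick sanity check.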

1.3 Bayes inference

Among the various researchers who have used the Bayesian approach, many look upon the problem of estimation rather than identification of outliers. The present study also makes use of the Bayesian approach to estimation in specific distributions, assuming the existence of a joint probability measure on Θ × 𝓧, where Θ ⊂ R^k is the parameter space corresponding to a vector of parameters θ = (θ1, θ2, ..., θk) and 𝓧 is the sample space. This joint measure is determined through a prior measure on Θ and the conditional measure on 𝓧 for a given θ in Θ, which in turn provides the posterior measure on Θ for a specified x in 𝓧, along with a marginal measure on 𝓧. In this formulation the posterior density function of θ can be obtained through Bayes' theorem as (Raiffa and Schlaifer (1961))

    f(θ | x) = φ(θ) L(x | θ) C(x),    (1.5)

where φ(θ) is the prior density and C(x) is a normalising constant, independent of θ, given by

    ∫_Θ f(θ | x) dθ = C(x) ∫_Θ φ(θ) L(x | θ) dθ = 1.    (1.6)

Throughout the sequel we denote by C, with or without suffixes, such normalising constants attached to the posterior density. In finding point estimates of θ we employ either the mode of (1.5), or make use of the quadratic loss function

    L(θ̂(x), θ) = (θ̂(x) - θ)²,    (1.7)

to prescribe the estimate as the one that minimises

    E L(θ̂(x), θ) = ∫_Θ (θ̂(x) - θ)² f(θ | x) dθ,    (1.8)

namely

    θ̂(x) = E(θ | x).    (1.9)

The expected loss resulting from the use of (1.9) as the estimator of θ is the posterior variance of θ. Since (1.9) is calculated for a specific sample point x, sometimes it is of advantage to look at the Bayes risk

    R(θ̂, θ) = ∫_𝓧 ∫_Θ L(θ̂, θ) L(x | θ) φ(θ) dθ dx.    (1.10)

Interval estimates (θ_L(x), θ_U(x)) of θ, that is, two values θ_L and θ_U such that the interval (θ_L, θ_U) has a significant posterior probability for θ, are obtained as solutions of the equation

    ∫_{θ_L}^{θ_U} f(θ | x) dθ = 1 - α.    (1.11)

Since there can be more than one set of (θ_L, θ_U) that satisfies (1.11), in order to render the estimate unique, often the conditions

    ∫_{-∞}^{θ_L} f(θ | x) dθ = α/2 = ∫_{θ_U}^{∞} f(θ | x) dθ    (1.12)

are also imposed.
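Numerically, the equal-tails conditions of (1.12) amount to locating the α/2 and 1-α/2 quantiles of the posterior. A sketch that accumulates posterior mass over a grid (the exponential-likelihood example with a flat prior is an assumption for illustration, not a model from the thesis):

```python
import math

def equal_tails(thetas, dens, alpha):
    """Return (theta_L, theta_U) approximately satisfying (1.12):
    thetas is an equally spaced grid, dens the (possibly unnormalised)
    posterior density evaluated on it."""
    step = thetas[1] - thetas[0]
    total = sum(dens) * step
    cum, lo, hi = 0.0, thetas[0], thetas[-1]
    for t, d in zip(thetas, dens):
        prev = cum
        cum += d * step / total
        if prev < alpha / 2 <= cum:
            lo = t                      # alpha/2 posterior mass lies below theta_L
        if prev < 1 - alpha / 2 <= cum:
            hi = t                      # alpha/2 posterior mass lies above theta_U
    return lo, hi

# assumed example: n exponential observations with rate theta and a flat prior,
# so that f(theta | x) is proportional to theta**n * exp(-theta * s)
n, s = 10, 12.0
grid = [i / 1000 for i in range(1, 5000)]
dens = [t ** n * math.exp(-t * s) for t in grid]
lo, hi = equal_tails(grid, dens, 0.05)
```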

What has been outlined so far in the present section is the general Bayesian framework applicable to all inference procedures, including the situation when the sample contains outlying observations. The Bayesian approach postulates the existence of prior distributions for the elements s in S, as well as for the parameters in F and G, involved in a k-outlier model represented by equation (1.2). Thus the mixture and exchangeable models provide examples of assigning a distribution to s, and they are amenable to a fuller Bayesian analysis when the parameters are also assumed to have appropriate prior distributions. Among the various researchers who have used this approach, many look upon the problem of estimation rather than identification of outliers. Restricting our attention only to specific probability models, we review the important developments in this context. Of these, Box and Tiao (1968) presented an extensive systematic analysis for the normal case. Analysis of data from normal populations containing outliers is also discussed in Guttman (1973), Guttman and Khatri (1975), O'Hagan (1979) and Goldstein (1968).

It seems that Kale (1969) was the first author to discuss Bayesian methods for analysing outliers in exponential samples. He obtained a semi-Bayesian estimator of θ, with F(x; θ) = exp(θ) and G(x; θλ) = exp(θλ), λ ≥ 1, in the presence of an outlier, using a beta prior for λ and leaving θ without being assigned any prior distribution. Under the same exchangeable model, with F having an exponential distribution with mean θ and G an exponential distribution with mean θ/c, 0 < c < 1, Sinha (1978) obtained the Bayes estimate of the survival function with a beta prior for c and no prior attached to θ. In a later paper, the same author considered a beta prior for c along with three possible prior families for θ, in order to estimate these parameters and the survival function.

Lingappaiah (1976) investigated the estimation problem in the presence of outliers for a more general family that includes the gamma, Weibull and exponential models as particular cases. The basic model has the pdf

    f(x; α, b, η) = [b / Γ(α/b)] η^{α/b} x^{α-1} exp(-η x^b),  x > 0.    (1.13)

He considered the situation where, among the n observations, (n-k) are distributed as (1.13) and k of them follow f(x; α, b, η δr), r = 1, 2, ..., k, 0 < δr < 1. With an exchangeable model for the outliers, he obtained the Bayes estimates of the δr and η, using an exponential prior for η and beta prior distributions for the δr, for fixed k.

Dixit (1991) obtained the Bayes estimates of the parameters, and also of a power of the scale parameter, for the gamma distribution under various priors in the presence of

k known outliers. He assumed that, of the random sample x1, x2, ..., xn of size n, k of the observations are distributed as

    f(x; σ/α) = [1 / (Γ(p) (σ/α)^p)] x^{p-1} exp[-(αx/σ)],    (1.14)

where x > 0, σ > 0, α ≥ 1 and p is known, while the remaining (n-k) random variables are distributed as f(x; σ). With a beta prior for α, and inverted gamma and quasi-priors for σ, the Bayes estimate of σ^r under the loss function

    L(σ̂, σ) = (σ̂^r - σ^r)²    (1.15)

was derived.

It appears that the latest work in this category concerning the exponential model is that of Kale and Kale (1992). They assume that X1, X2, ..., Xn are such that (n-k) of these are independent, identically distributed as exponential with mean θ, having pdf

    f(x; θ) = (1/θ) exp[-(x/θ)],  x ≥ 0, θ > 0,    (1.16)

while the remaining k observations X_{s1}, X_{s2}, ..., X_{sk} are iid exponential with mean θ/α, where 0 < α ≤ 1. The indexing set of observations s = (s1, ..., sk) is treated as a parameter over S, the set of subsets of k integers out of n. With a uniform prior over S for s, and three other priors for θ and α, viz. inverted gamma × beta, quasi-prior × beta and Jeffreys' prior × beta, they derived the Bayes estimates of the parameters. They also gave two methods, of which the first, with the aid of the predictive distribution of X(k) given x(1), x(2), ..., x(k-1), explains how to determine the unknown number k of outliers that labels {X(n-k+1), ..., X(n)} as outliers. The other method depends on the posterior distribution of the indexing set s = (s1, ..., sk) for the determination of the number of outliers. The two methods have also been illustrated on a real data situation available in Nelson (1988).

Another problem of interest in the area of outlier analysis is the prediction of a future observation using a random sample in which one observation is an outlier. The idea behind such a prediction, as described in Dunsmore (1974), is to provide either a point or an interval estimate for a future observation. Lingappaiah (1989a) used this idea to construct prediction intervals for the maxima and minima of future observations when the samples are from an exponential distribution which contains an outlier. In later papers, Lingappaiah (1989b, 1990) obtained the one-sided Bayes prediction interval for the r-th ordered future observation in the presence of an outlier when the samples are from gamma and Weibull distributions respectively.

1.4 The present work

As already discussed, a familiar topic in the vast amount of literature available on outliers is the problem of estimating the parameters of specific probability models, like the normal, gamma, Weibull, exponential etc., when the data is known to contain one or more spurious observations. In spite of the popularity of the Pareto law in analysing data on incomes, city population sizes, occurrence of natural resources, stock price fluctuations, insurance risks, business failures, reliability etc., the problem of estimating its parameters in the presence of outliers does not appear to have been considered in the literature. Further, the model belongs to the class of long-tailed distributions and, as such, the appearance of extreme observations in the sample is quite common, and their identification or not as outliers becomes important. Accordingly, the main theme of the present thesis is focussed on various estimation problems, using the Bayesian approach, falling under the general category of accommodation procedures for analysing Pareto data containing outliers. We also derive some results pertaining to the exponential population that have relevance to life testing and reliability. The following is a brief outline of the discussions included in the remaining chapters.

In Chapter II, the problem of estimation of the parameters of the classical Pareto distribution specified by the density function

    f(x; α, σ) = α σ^α x^{-(α+1)},  x ≥ σ > 0, α > 0,    (1.17)

under the k-outlier exchangeable model is presented. Thus, of the n observations, (n-k) are distributed as (1.17) while the remaining k follow the same type of distribution with density function

    g(x; α, b, σ) = α b σ^{αb} x^{-(αb+1)},  x ≥ σ > 0, α, b > 0,    (1.18)

where b is assumed to be known. Notice that when b < 1 the discordant observation is a lower outlier, while b > 1 indicates an upper outlier. With the above assumptions, we obtain the Bayes estimates of α and σ under quadratic loss in two situations: when the scale parameter σ is known, assuming a gamma prior for α, and when σ is unknown, with a joint gamma-power family prior. It is also shown that our results reduce to those of Arnold and Press (1983) once we take b = 1. A comparative study of the estimates is provided with the aid of simulated samples.

In Chapter III, the estimation problem is conceived in a more general and realistic situation in which the shape parameter of the contaminating distribution is also not known. Under the above model assumptions, with prior distributions for α and σ and a non-informative prior for b, we obtain the Bayes estimates of α, b and σ in the two cases when σ is known and unknown.

Since the Pareto distribution is extensively used as a realistic model for personal incomes that exceed a specified level of income, the estimation of the survival function

    R(x) = P[X > x] = (x/σ)^{-α},    (1.19)

is often an important objective. Equation (1.19) also represents the reliability function in the context of life testing, where the Pareto model characterises life times that have a failure rate of the form αx^{-1}, which is ever decreasing. In Chapter IV, we discuss the estimation of (1.19) when the sample contains a known number of outliers, under three different data generating mechanisms, viz. the exchangeable model, the identifiable model, and the censored model that utilises only the first (n-k) order statistics for estimation after identifying the last k as outliers. In this investigation we assume that b > 1 and that the scale parameter is known. The behaviour of the point and interval estimates obtained in all three cases is also studied by varying the sample size and the hyper-parameters of the prior distributions.
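The survival function (1.19) and the associated failure rate are immediate to compute; a short Python sketch (the numerical values are chosen only for illustration):

```python
def pareto_survival(x, alpha, sigma):
    """Survival function (1.19): R(x) = P[X > x] = (x/sigma)**(-alpha), x >= sigma."""
    return (x / sigma) ** (-alpha)

def pareto_failure_rate(x, alpha, sigma):
    """Failure rate f(x)/R(x) = alpha/x for the Pareto model, decreasing in x."""
    return alpha / x

R = pareto_survival(300.0, 2.5, 150.0)   # survival probability at x = 2 * sigma
```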

As a natural continuation of the Bayesian framework proposed earlier, we consider in Chapter V the prediction of a future observation based on a random sample that contains one contaminant. The object of the inference is the r-th prospective order statistic from the Pareto population (1.17). We present a 100(1-β)% predictive interval for order statistics in both the cases where the shape parameter of the contaminating distribution is known and unknown.

Chapter VI is devoted to the study of estimation problems concerning the exponential parameters under a k-outlier model. Assuming the exchangeable model for the outliers, Bayes point and interval estimates are obtained for the parameters and the survival function. We also suggest a method to determine the number of outliers present in a sample of size n, using the predictive density.

The problem of obtaining a 100(1-β)% predictive interval (two-sided) for future order statistics from the exponential population in the presence of outliers is investigated in Chapter VII.

In the last chapter (Chapter VIII), we consider the estimation of R = P[X>Y] when X and Y are independent exponential random variables and the data on each of them contain a discordant observation. The problem has relevance in the context of analysing the reliability of a component with strength X, which is subjected to a stress Y, where X and Y are exponentially distributed and stress is independent of strength. The component fails whenever Y > X, so that R is a measure of component reliability. The estimates of R are derived under the exchangeable, identifiable and the censored models.

Chapter II

ESTIMATION OF PARETO PARAMETERS

2.1 Introduction

In this chapter we discuss the problem of estimating the parameters of the classical Pareto distribution specified by the density function

    f(x; α, σ) = α σ^α x^{-(α+1)},  x ≥ σ > 0, α > 0,    (2.1)

in the presence of k outlying observations, using the Bayesian approach.

The use of the Pareto distribution as a model for various socio-economic phenomena dates back to the late nineteenth century, when Pareto observed that the number of persons whose incomes exceed x can be approximated by cx^{-α}. Arnold (1983) gives an extensive historical survey of its use in the context of income analysis, together with the various properties of the distribution. Though initially the Pareto distribution was used as a model for personal incomes and influenced the development of measures of income inequality, it later acquired prominence in theoretical studies as a long-tailed distribution, as well as in several other areas of scientific activity, some of which were mentioned in Section 1.4.

The results in this chapter are due to appear in Jeevanand and Nair (1992).

Studies on Bayesian inference procedures for the Pareto distribution when the sample is homogeneous have been discussed in Muniruzzaman (1968), Malik (1970), Zellner (1971), Rao Tummala (1977) and Sinha and Howlader (1980), where they take the scale parameter σ as known. Lwin (1972, 1974) developed estimates of both the shape and the scale parameters using a joint natural conjugate prior distribution for α and σ. Attributing Lwin's prior to be unnaturally restrictive, Arnold and Press (1983) suggested a gamma-power prior distribution for (α, σ), which also provides a posterior distribution belonging to the same family. Later, the same authors (Arnold and Press (1986, 1989)) extended these results to grouped and censored data. In spite of the wide applicability of the model (2.1), it seems that the problem of inferring the parameters of the Pareto population (2.1) in the presence of outliers has not yet been considered in the literature. When some of the observations are in fact contaminants, special inference procedures are required, and this motivates the discussion in the present chapter.

2.2 The model

We assume that x = (x1, x2, ..., xn) is a random sample from (2.1), containing k outliers (k known), but which

of them are the outliers is not known. Thus, of the n observations, (n-k) are distributed as (2.1) while the remaining k follow the same kind of distribution with density function

    g(x; α, b, σ) = α b σ^{αb} x^{-(αb+1)},  x ≥ σ > 0, b > 0,    (2.2)

where b is assumed known. In this exchangeable model, the likelihood can be written according to (1.3) as

    L(x | α, b, σ) = (n choose k)^{-1} α^n b^k σ^{[n+(b-1)k]α} ( ∏_{i=1}^{n} x_i^{-(α+1)} ) Σ* ( ∏_{j=1}^{k} x_{A_j}^{-α(b-1)} ),    (2.3)

where Σ* denotes the k-fold sum over all index k-tuples (A_1, A_2, ..., A_k) with 1 ≤ A_1 < A_2 < ... < A_k ≤ n.

When b = 1, the product over j in (2.3) reduces to 1, so that the multiple sum counts the number of ways of filling the k-tuple (A_1, A_2, ..., A_k) with integers from 1 to n for which A_1 < A_2 < ... < A_k, which is (n choose k).
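For small n, the multiple sum Σ* in (2.3) can be evaluated by direct enumeration of the k-subsets. A Python sketch (the sample values are invented for illustration; the thesis's own programs, in Fortran 77, are listed in the Appendix):

```python
import math
from itertools import combinations

def pareto_outlier_lik(x, alpha, b, sigma, k):
    """Exchangeable k-outlier Pareto likelihood (2.3); the sum runs over
    all index tuples A_1 < ... < A_k."""
    n = len(x)
    base = (alpha ** n * b ** k * sigma ** (alpha * (n + (b - 1) * k))
            * math.prod(xi ** (-(alpha + 1)) for xi in x))
    star = sum(math.prod(x[j] ** (-alpha * (b - 1)) for j in A)
               for A in combinations(range(n), k))
    return base * star / math.comb(n, k)

x = [152.0, 170.5, 210.3, 161.1]
L = pareto_outlier_lik(x, alpha=2.5, b=6.0, sigma=150.0, k=1)
```

With b = 1 every term of the subset sum equals 1, and the expression reduces to the ordinary homogeneous Pareto likelihood, which serves as a check.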

Customarily, the estimation problem is discussed by distinguishing three cases: when one of the parameters is known, and when both are objects of inference. However, the case when α is known and σ has to be estimated rarely occurs in practice, and hence it is omitted from the present discussion.

2.3 Estimation with known scale parameter

Since σ is known, the form of the likelihood (2.3) gives the kernel as

    k(α | x) = α^n e^{-αt} Σ* e^{-α(b-1)t_A},

so that the prior belongs to the gamma family. Thus we choose the prior density as

    φ(α) = [t'^r / Γ(r)] α^{r-1} e^{-αt'},  r, t', α > 0,    (2.4)

and the posterior density from (2.4) and (2.3) turns out to be

    f(α | x) ∝ L(x | α) φ(α)
             = C1 α^n e^{-αt} [ Σ* e^{-α(b-1)t_A} ] α^{r-1} e^{-αt'}
             = [C1(m, T)]^{-1} Σ* α^{m-1} exp{-α[T + (b-1)t_A]},  α > 0,    (2.5)

where C with various suffixes denotes the normalising constants and

    T = t + t',  m = n + r,  t = Σ_{i=1}^{n} log(x_i/σ)  and  t_A = Σ_{i=1}^{k} log(x_{A_i}/σ).

Now, to obtain C1(m, T), we use ∫_0^∞ f(α | x) dα = 1, so that

    C1(m, T) = ∫_0^∞ Σ* α^{m-1} exp{-α[T + (b-1)t_A]} dα
             = Γ(m) Σ* [T + (b-1)t_A]^{-m}.    (2.6)

One can obtain the estimator of α by specifying an appropriate loss function and using (2.5). Under quadratic loss, the Bayes estimator of α according to (1.9) is the mean of the posterior distribution (2.5). Thus the Bayes estimate α̂1 is

    α̂1 = E(α | x) = [C1(m, T)]^{-1} ∫_0^∞ Σ* α^m exp{-α[T + (b-1)t_A]} dα
        = [C1(m, T)]^{-1} Γ(m+1) Σ* [T + (b-1)t_A]^{-(m+1)}
        = C1(m+1, T) / C1(m, T).    (2.7)

The expected loss incurred when α̂1 is used as the estimate of α is

    V(α̂1 | x) = E[(α - α̂1)² | x]
              = E(α² | x) - α̂1²
              = [C1(m+2, T) / C1(m, T)] - α̂1².    (2.8)
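The constants C1(m, T) and the estimates (2.7)-(2.8) are straightforward to compute once the subset sums t_A are enumerated. A Python sketch (the sample values are assumed for illustration; the default r = t' = 0 corresponds to the non-informative limit of the prior):

```python
import math
from itertools import combinations

def C1(m, T, shifts):
    """C1(m, T) = Gamma(m) * sum over subsets A of [T + (b-1) t_A]**(-m);
    shifts holds the values (b-1)*t_A, one per k-subset A."""
    return math.gamma(m) * sum((T + v) ** (-m) for v in shifts)

def alpha_hat1(x, sigma, b, k, r=0.0, t_prime=0.0):
    """Posterior mean (2.7) and expected loss (2.8) of alpha, sigma known."""
    n = len(x)
    logs = [math.log(xi / sigma) for xi in x]
    T = sum(logs) + t_prime
    m = n + r
    shifts = [(b - 1) * sum(logs[i] for i in A)
              for A in combinations(range(n), k)]
    est = C1(m + 1, T, shifts) / C1(m, T, shifts)
    var = C1(m + 2, T, shifts) / C1(m, T, shifts) - est ** 2
    return est, var

x = [160.0, 185.0, 210.0, 172.0, 450.0]   # assumed sample, sigma = 150
est, var = alpha_hat1(x, 150.0, 6.0, 1)
```

Setting b = 1 collapses the subset sum, and the estimate reduces to m/T, the Arnold-Press form noted in the deductions below.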

Deductions

1. In (2.7), as t' and r tend to zero, we have

    α̂1 = C1(n+1, t) / C1(n, t),    (2.9)

which is the estimate corresponding to the non-informative improper prior of Jeffreys (1961).

2. When b = 1 in (2.7), the resulting estimate,

    α̂1 = m / T,

is based on a sample from (2.1) without contaminants and is the expression obtained in Arnold and Press (1983). In this case, if t' and r tend to zero, one obtains n/t, the maximum likelihood estimate of α.

2.4 Estimation with unknown scale parameter

We can now look at the more general data situation when both the scale and the shape parameters remain unknown. The kernel of the likelihood suggests the following form for the joint prior density of α and σ:

    φ(α, σ) = c0 α^r σ^{uα-1} e^{-αz'},  α > 0, 0 < σ ≤ σ0;  r, z', u > 0.    (2.10)

The corresponding posterior distribution is

    f(α, σ | x) = C3 Σ* α^n σ^{[n+(b-1)k]α} α^r σ^{uα-1} e^{-α[z+(b-1)z_A]} e^{-αz'}
                = C3 Σ* α^{n+r} σ^{[n+(b-1)k+u]α-1} exp{-α[z + z' + (b-1)z_A]}
                = C3 Σ* α^m σ^{Uα-1} exp{-α[S + (b-1)z_A]},  α > 0, 0 < σ ≤ λ,    (2.11)

where

    S = z + z',  U = n + u + (b-1)k,  z_A = Σ_{i=1}^{k} log x_{A_i},  z = Σ_{i=1}^{n} log x_i  and  λ = min(x(1), σ0).

2.4.1 Estimation of α

From equation (2.11), the marginal posterior density of α is

    f(α | x) = C3 Σ* ∫_0^λ α^m σ^{Uα-1} exp{-α[S + (b-1)z_A]} dσ
             = C3 Σ* α^m (λ^{Uα} / (Uα)) exp{-α[S + (b-1)z_A]}
             = C4 Σ* α^{m-1} exp{-α[S + (b-1)z_A - U log λ]}
             = [C4(m, S1)]^{-1} Σ* α^{m-1} exp{-α[S1 + (b-1)z_A]},    (2.12)

where

    S1 = S - U log λ  and  C4(m, S1) = Γ(m) Σ* [S1 + (b-1)z_A]^{-m}.

The Bayes estimate of α under quadratic loss is

    α̂2 = E(α | x) = C4(m+1, S1) / C4(m, S1)    (2.13)

with expected loss

    V(α̂2 | x) = [C4(m+2, S1) / C4(m, S1)] - α̂2².    (2.14)

Deductions

1. The estimator corresponding to Jeffreys' prior, obtained by allowing r, z' and u to tend to zero, is

    α̂2 = C4[n+1, z - (n+(b-1)k) log x(1)] / C4[n, z - (n+(b-1)k) log x(1)].    (2.15)

2. Setting b = 1 in (2.13), the Bayes estimator based on the uncontaminated Pareto sample in Arnold and Press (1983),

    α̂2 = m / (S - (n+u) log λ),    (2.16)

is obtained.

2.4.2 Estimation of σ

The marginal density of σ is

    f(σ | x) = C3 Σ* ∫_0^∞ α^m σ^{Uα-1} exp{-α[S + (b-1)z_A]} dα
             = C3 Σ* σ^{-1} Γ(m+1) [S + (b-1)z_A - U log σ]^{-(m+1)}
             = C5^{-1} Σ* [Q_A - log σ]^{-(m+1)} σ^{-1},  0 < σ ≤ λ,    (2.17)

where

    Q_A = [S + (b-1)z_A] / U,  C5 = Σ* Ψ_p(0, m+1)  and  p = Q_A - log λ.

To obtain the value of C5 we consider the integral

    I(c) = Σ* ∫_0^λ σ^{c-1} [Q_A - log σ]^{-(m+1)} dσ.

Setting y = Q_A - log σ, we have

    I(c) = Σ* ∫_{Q_A - log λ}^{∞} y^{-(m+1)} exp{c(Q_A - y)} dy
         = Σ* exp[cQ_A] Ψ_p(c, m+1),

so that

    C5 = I(0) = Σ* Ψ_p(0, m+1).

The Ψ(·,·) function given above is related to the well-known exponential integral E_m(·) (Abramowitz and Stegun (1972)) as Ψ_b(c, m) = b^{1-m} E_m(bc). The value of Ψ(·,·) can be read from the tabulated values of E_m(·) given by them for integer values of m, and by interpolation for non-integer values. The estimator of σ under squared error loss is

    σ̂1 = E(σ | x) = [Σ* exp[Q_A] Ψ_p(1, m+1)] / [Σ* Ψ_p(0, m+1)]    (2.18)

with expected loss

    V(σ̂1 | x) = [Σ* exp[2Q_A] Ψ_p(2, m+1)] / [Σ* Ψ_p(0, m+1)] - σ̂1².    (2.19)
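In place of the tables, Ψ_p(c, m) = ∫_p^∞ y^{-m} e^{-cy} dy can be approximated by elementary numerical integration (or, for integer m, via the relation Ψ_b(c, m) = b^{1-m} E_m(bc) and a library routine such as scipy.special.expn). A self-contained sketch using the trapezoidal rule, where the step count and cutoff are assumed tuning choices:

```python
import math

def psi(p, c, m, steps=200000, cutoff=60.0):
    """Psi_p(c, m): integral of y**(-m) * exp(-c*y) over (p, infinity),
    approximated by the trapezoidal rule on (p, p + cutoff); the neglected
    tail is of order (p + cutoff)**(1 - m), negligible for moderate m."""
    a, b = p, p + cutoff
    h = (b - a) / steps
    f = lambda y: y ** (-m) * math.exp(-c * y)
    acc = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps))
    return acc * h

# sanity check against the closed form Psi_p(0, m) = p**(1-m) / (m-1)
p, m = 0.5, 4
approx = psi(p, 0.0, m)
exact = p ** (1 - m) / (m - 1)
```

The estimator (2.18) is then the ratio of sums of exp(Q_A) * psi(p, 1, m+1) terms to psi(p, 0, m+1) terms taken over the k-subsets A.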

Deductions

1. The estimator corresponding to Jeffreys' prior is obtained by allowing r, z' and u to tend to zero, giving

    σ̂1 = [Σ* exp[Q'_A] Ψ_{Q'_A - log x(1)}(1, n+1)] / [Σ* Ψ_{Q'_A - log x(1)}(0, n+1)],    (2.20)

with Q'_A = [z + (b-1)z_A] / [n + (b-1)k].

2. In the absence of outliers, (2.18) reduces to

    σ̂1 = e^θ Ψ_θ(1, m+1) / Ψ_θ(0, m+1);  θ = [S / (n+u)] - log λ.    (2.21)


2.5 Discussion

In order to assess how the various estimates behave in a specific situation, a random sample of size 19 was simulated, of which 18 observations come from the population with pdf (2.1) with parameters α = 2.5, σ = 150, and a single observation comes from (2.2) with σ = 150 and αb = 15 (i.e. b = 6), producing the following observations:

    158.8618   811.899    198.8853   175.9670
    173.4848   157.9960   887.4415   198.8889
    183.9583   173.8940   165.8673   853.8669
    167.7641   808.8048   170.4876   803.0875
    177.0848   594.9689   183.9489

The estimates of the parameters derived in Sections 2.2 through 2.4, based on the above sample, are exhibited in Tables 2.1, 2.2 and 2.3. The losses corresponding to the estimates are shown in braces below each entry. It is to be noted that ᾶ1, ᾶ2 and σ̃1 are the Bayes estimates discussed in Arnold and Press (1983). Further, to learn the sampling behaviour of the estimates, samples of sizes 10, 30 and 50 were also generated for the above parameter values, and the bias and expected losses were calculated. The results obtained are given in Tables 2.4 to 2.7. In all cases, the hyper-parameters of the prior were chosen as u = 0.1, 0.001, r = 1, 2, 3 and t' = 1, 2, 2.5, 3. The computation of the Bayes estimators and the corresponding risks was done on the mainframe computer using Fortran 77. The evaluation of the exponential integral was done using the Fortran subroutine available in the mathematical library of IMSL, and those programs are given in the Appendix.

It can be observed that the bias and expected loss associated with the estimates (2.7), (2.8) and (2.18) in the present work are considerably less than those of Arnold and Press (1983) in almost all cases. Thus the procedure outlined provides improved estimates, justifying the choice of G in the accommodation approach. For moderate values of t′ the expected losses become smaller under the same conditions. However, as the sample size increases, the prior parameters have less influence on both the bias and the expected loss, and the estimates become closer to the true parameter value. An interesting feature of the proposed estimates is that even for very moderate sample sizes our approach substantially improves upon the estimates of Arnold and Press (1983), irrespective of whether σ is held known or unknown.

Table 2.1
Estimates of α when σ is known, for samples from the Pareto distribution with α = 2.5, σ = 150, b = 6. For r = 1, 2, 3 with t′ = 1, 2, 2.5, 3, and for the non-informative prior, the table gives $\hat{\alpha}_1$ (Arnold & Press) and $\hat{\alpha}_1$ (present study), with the corresponding expected losses in parentheses below each entry. [Numerical entries are illegible in the source copy and are not reproduced here.]

Table 2.2
Estimates when σ is unknown and σ₀ < x₍₁₎ (σ₀ = 150). In panels for u = 0.001, u = 0.1 and Jeffreys' prior, the table gives the Bayes estimates $\hat{\alpha}_2$ and $\hat{\sigma}_1$ under Arnold & Press and under the present study, for r = 1, 2, 3 and t′ = 1, 2, 2.5, 3, with expected losses in parentheses. [Numerical entries are illegible in the source copy and are not reproduced here.]

Table 2.3
Estimates when σ is unknown and σ₀ > x₍₁₎. Same layout as Table 2.2: $\hat{\alpha}_2$ and $\hat{\sigma}_1$ under Arnold & Press and under the present study, for r = 1, 2, 3 and t′ = 1, 2, 2.5, 3, in panels for u = 0.001, u = 0.1 and Jeffreys' prior, with expected losses in parentheses. [Numerical entries are illegible in the source copy and are not reproduced here.]

Table 2.4
Absolute bias and expected loss of $\hat{\alpha}_1$ when σ is known, for sample sizes n = 10, 30, 50. For each n the table gives $\hat{\alpha}_1$ (Arnold & Press) and $\hat{\alpha}_1$ (present study) under the prior settings of Section 2.5 and under Jeffreys' prior, with expected losses in parentheses. [Numerical entries are illegible in the source copy and are not reproduced here.]

Table 2.5
Absolute bias and expected loss of $\hat{\alpha}_1$ when σ is unknown and σ₀ < x₍₁₎ (σ₀ = 150), for sample sizes n = 10, 30, 50, under the prior settings of Section 2.5, including the non-informative prior; expected losses in parentheses. [Numerical entries of this and the following tables are illegible in the source copy and are not reproduced here.]