Paper: Regression Analysis III Module: Linear Mixed Model

Academic year: 2022


Subject: Statistics


Development Team

Principal investigator: Dr. Bhaswati Ganguli, Professor, Department of Statistics, University of Calcutta

Paper co-ordinator: Dr. Bhaswati Ganguli, Professor, Department of Statistics, University of Calcutta

Content writers: Sayantee Jana, Graduate student, Department of Mathematics and Statistics, McMaster University; Sujit Kumar Ray, Analytics professional, Kolkata

Content reviewer: Department of Statistics, University of Calcutta

Linear Mixed Model (LMM):

- Linear mixed model: $\bar y = X\bar\beta + Z\bar u + \bar\varepsilon$, with $E(\bar\varepsilon) = \bar 0$ and $D(\bar\varepsilon) = \Omega$
- $\bar\beta$ = fixed effect, $\bar u$ = random effect
- $X$, $Z$ $\to$ known design matrices
- $\bar u \sim N(\bar 0, D)$
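The model above can be checked by direct simulation; a minimal numpy sketch, where the dimensions, $\bar\beta$, $D$, and $\sigma^2$ are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, q = 200, 2, 3           # observations, fixed effects, random effects
sigma2 = 0.5                  # residual variance (illustrative)

X = rng.normal(size=(n, p))   # known design matrix for the fixed effects
Z = rng.normal(size=(n, q))   # known design matrix for the random effects
beta = np.array([1.0, -2.0])  # fixed-effect vector (illustrative values)
D = np.diag([0.3, 0.6, 0.9])  # covariance matrix of the random effects

u = rng.multivariate_normal(np.zeros(q), D)      # u ~ N(0, D)
eps = rng.normal(scale=np.sqrt(sigma2), size=n)  # errors with D(eps) = sigma2 I

y = X @ beta + Z @ u + eps    # y = X beta + Z u + eps
print(y.shape)                # (200,)
```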

Linear Mixed Model (LMM):

- Conditional moments: $E(\bar y \mid \bar u) = X\bar\beta + Z\bar u$, $Var(\bar y \mid \bar u) = \sigma^2 I_n$
- Marginal moments: $E(\bar y) = X\bar\beta$, $Var(\bar y) = \sigma^2 I_n + Z D Z'$
  [$Var(\bar y)$ depends only on $Z$, not on $X$ or $\bar\beta$; if $D(\bar\varepsilon) = R$, then $Var(\bar y) = R + Z D Z'$]
- Advantages: $D$ contributes only $\frac{q(q+1)}{2}$ parameters, with $q \ll n$.
- Disadvantages: dimension.
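The marginal variance formula can be verified by Monte Carlo; a small sketch (the particular $Z$, $D$, and $\sigma^2$ are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

n, q = 4, 2
sigma2 = 0.5
Z = rng.normal(size=(n, q))
D = np.diag([0.4, 0.9])

# Theoretical marginal covariance: sigma^2 I_n + Z D Z'
V_theory = sigma2 * np.eye(n) + Z @ D @ Z.T

# Monte Carlo draws of y; the fixed part X beta only shifts the mean,
# so it is set to zero without affecting the covariance
reps = 200_000
U = rng.multivariate_normal(np.zeros(q), D, size=reps)  # one u per replicate
E = rng.normal(scale=np.sqrt(sigma2), size=(reps, n))
Y = U @ Z.T + E                                         # each row is one draw of y

V_mc = np.cov(Y, rowvar=False)
print(np.max(np.abs(V_mc - V_theory)))  # shrinks toward 0 as reps grows
```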

Linear Mixed Model (LMM):

- Link: $\eta = g(\mu)$, where $\eta = X\bar\beta + Z\bar u$
- $\mu = g^{-1}(\eta) = g^{-1}(X\bar\beta + Z\bar u)$
- $E(y) = E(\mu) = E[g^{-1}(X\bar\beta + Z\bar u)]$
- $Var(y) = Var(\mu) + a(\phi)\, E[V(\mu)]$

Linear Mixed Model (LMM):

- Example, log link: $g(\mu) = \log(\mu)$
- Conditional distribution: $Y \mid u \sim P(\mu) \Rightarrow Var(Y \mid u) = \mu$
- $\eta = g(\mu) \Rightarrow \log(\mu) = X\bar\beta + Z\bar u \Rightarrow \mu = e^{X\bar\beta + Z\bar u}$
- $E(Y) = E[e^{X\bar\beta + Z\bar u}] = e^{X\bar\beta}\, E(e^{Z\bar u}) = e^{X\bar\beta}\, M_{\bar u}(Z)$, where $M_{\bar u}$ is the moment generating function of $\bar u$
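For a scalar normal random effect $u \sim N(0, \tau^2)$ the MGF is $M_u(z) = e^{z^2\tau^2/2}$, so $E(Y) = e^{x\beta + z^2\tau^2/2}$. A Monte Carlo sketch of this identity (all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

xb = 0.4      # the fixed part x'beta for a single observation
z = 1.5       # random-effect loading
tau2 = 0.25   # Var(u), with u ~ N(0, tau2)

# Closed form: E(Y) = e^{x'beta} * M_u(z) with M_u(z) = exp(z^2 tau2 / 2)
ey_theory = np.exp(xb + z * z * tau2 / 2)

# Two-stage Monte Carlo: draw u, then Y | u ~ Poisson(mu), log(mu) = xb + z u
reps = 500_000
u = rng.normal(scale=np.sqrt(tau2), size=reps)
mu = np.exp(xb + z * u)
y = rng.poisson(mu)
print(y.mean(), ey_theory)  # the two agree closely for large reps
```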

Best Linear Unbiased Predictor (BLUP)

- BLUP is a method of estimating the random effects in a model.
- The P in BLUP stands for 'predictor' because, conventionally, estimators of random effects are called predictors and estimators of fixed effects are called estimators.
- BLUP is called 'best' because it has the minimum MSE among all linear unbiased predictors.
- The BLUPs are the solutions of Henderson's mixed model equations.
- When the covariance of the random effects $\bar u$ tends to zero, Henderson's mixed model equations tend formally to the GLS equations for estimating $\bar\beta$ and $\bar u$, with $\bar u$ regarded as a fixed effect.
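Henderson's mixed model equations can be solved directly and cross-checked against the marginal GLS and BLUP formulas; a sketch assuming $Var(\bar\varepsilon) = \sigma^2 I_n$ and illustrative values for $X$, $Z$, $D$:

```python
import numpy as np

rng = np.random.default_rng(3)

n, p, q = 50, 2, 3
sigma2 = 1.0
X = rng.normal(size=(n, p))
Z = rng.normal(size=(n, q))
D = np.diag([0.5, 1.0, 1.5])
y = rng.normal(size=n)  # any response vector works for checking the algebra

# Henderson's mixed model equations (scaled through by sigma^2):
# [ X'X   X'Z             ] [beta]   [X'y]
# [ Z'X   Z'Z + s2 D^{-1} ] [ u  ] = [Z'y]
A = np.block([[X.T @ X, X.T @ Z],
              [Z.T @ X, Z.T @ Z + sigma2 * np.linalg.inv(D)]])
b = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(A, b)
beta_hat, u_hat = sol[:p], sol[p:]

# Cross-check against the marginal formulas: GLS for beta and
# BLUP u = D Z' V^{-1} (y - X beta), with V = sigma^2 I + Z D Z'
V = sigma2 * np.eye(n) + Z @ D @ Z.T
Vi = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
u_blup = D @ Z.T @ Vi @ (y - X @ beta_gls)

print(np.allclose(beta_hat, beta_gls), np.allclose(u_hat, u_blup))  # True True
```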


Methods of deriving BLUP

- BLUP can be derived in many different ways, which makes it robust.
- Here we introduce four methods of deriving BLUP:
  - Henderson's justification
  - Bayesian derivation
  - Classical school
  - Goldberger's derivation


Henderson's justification

- He described the BLUPs as "joint maximum likelihood estimates".
- He assumed that both $\bar u$ and $\bar\varepsilon$ are normally distributed.
- He maximized the joint density of $\bar y$ and $\bar u$ with respect to $\bar\beta$ and $\bar u$ to obtain the BLUPs.
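With $Var(\bar\varepsilon) = \sigma^2 I_n$, the maximization can be made explicit. The joint log-density is

```latex
\log f(\bar y, \bar u) = \text{const}
  - \frac{1}{2\sigma^2}\,(\bar y - X\bar\beta - Z\bar u)'(\bar y - X\bar\beta - Z\bar u)
  - \frac{1}{2}\,\bar u' D^{-1} \bar u ,
```

and setting the derivatives with respect to $\bar\beta$ and $\bar u$ to zero gives

```latex
X'(\bar y - X\bar\beta - Z\bar u) = 0, \qquad
\frac{1}{\sigma^2}\, Z'(\bar y - X\bar\beta - Z\bar u) - D^{-1}\bar u = 0,
```

which rearrange to Henderson's mixed model equations.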


Bayesian derivation

- Assume that $\bar\beta$ has a uniform improper prior distribution.
- Also assume that $\bar u$ has a prior distribution with mean 0 and positive variance, independent of $\bar\beta$.
- Then the posterior mode is the BLUP.
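A sketch of why this works, assuming $Var(\bar\varepsilon) = \sigma^2 I_n$: with a flat prior on $\bar\beta$ and $\bar u \sim N(\bar 0, D)$, the posterior is proportional to the same quadratic form that Henderson maximized,

```latex
f(\bar\beta, \bar u \mid \bar y)
  \propto f(\bar y \mid \bar\beta, \bar u)\, f(\bar u)
  \propto \exp\!\Big( -\frac{1}{2\sigma^2}\,\|\bar y - X\bar\beta - Z\bar u\|^2
                      - \frac{1}{2}\,\bar u' D^{-1} \bar u \Big),
```

so its mode coincides with the solution of the mixed model equations, i.e. the BLUP.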


Classical school

- This method uses the estimation of residuals from a simple normal model.
- Properties of the residuals as estimators of the unknown errors:
  - linear
  - unbiased
  - minimum MSE among the class of linear unbiased estimators (LUEs)


Goldberger's Derivation

- He assumed the linear model $y = X\bar\beta + \bar\varepsilon$, where the disturbances $\bar\varepsilon$ satisfy $E(\bar\varepsilon) = 0$ and $Var(\bar\varepsilon) = \Omega$.
- He derived the best linear unbiased predictor of a future observation, given a new observable vector of regressors and its unobservable disturbance.
- He was the first to coin the term.
- Henderson was the first to use the acronym BLUP, in 1973.
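Goldberger's predictor has a closed form: with GLS estimate $\hat\beta$, the BLUP of a future $y_0 = x_0'\bar\beta + \varepsilon_0$ is $x_0'\hat\beta + w'\Omega^{-1}(y - X\hat\beta)$, where $w = Cov(\varepsilon_0, \bar\varepsilon)$. A sketch with an assumed AR(1)-style $\Omega$ (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

n, p = 30, 2
X = rng.normal(size=(n, p))
y = rng.normal(size=n)   # observed responses (illustrative)
x0 = rng.normal(size=p)  # regressors of the future observation

# Assumed disturbance covariance: AR(1)-style correlation Omega_ij = rho^|i-j|
rho = 0.6
idx = np.arange(n)
Omega = rho ** np.abs(idx[:, None] - idx[None, :])
w = rho ** (n - idx)     # Cov(eps_0, eps_i) for a future point at "time" n

Oi = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Oi @ X, X.T @ Oi @ y)

# Goldberger's BLUP of y0: GLS point prediction plus a correction
# that borrows the estimated disturbances y - X beta_gls
y0_hat = x0 @ beta_gls + w @ Oi @ (y - X @ beta_gls)
print(float(y0_hat))
```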


Summary

- Mixed models have a fixed effect and a random effect.
- The estimator of the random effects in a model is called the BLUP.
- The BLUPs are the solutions of Henderson's mixed model equations.
- There are several methods of deriving BLUP.
- For further details, students are referred to Robinson's paper "That BLUP is a Good Thing".

