LIMITING SPECTRAL DISTRIBUTION OF CIRCULANT MATRIX WITH DEPENDENT ENTRIES

ARUP BOSE AND KOUSHIK SAHA^{∗}

This article is subsumed in the Technical Report R6/2009, Stat-Math Unit.

Abstract. In this article, we derive the limiting spectral distribution of the *circulant matrix* when the input sequence is a stationary, infinite order, two-sided moving average process.

Keywords: Large dimensional random matrix, eigenvalues, circulant matrix, empirical spectral distribution, limiting spectral distribution, moving average process, convergence in distribution, convergence in probability, normal approximation.

AMS 2000 Subject Classification. 60F99, 62E20, 60G57.

1. Introduction and Main result

Suppose $\lambda_1, \lambda_2, \ldots, \lambda_n$ are all the eigenvalues of a square matrix $A$ of order $n$. Then the *empirical spectral distribution function (ESDF)* of $A$ is defined as

$$F_n(x, y) = n^{-1}\sum_{i=1}^{n} I\{\operatorname{Re}\lambda_i \le x,\ \operatorname{Im}\lambda_i \le y\}.$$
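As a purely illustrative aside (not part of the original article), the ESDF of a fixed matrix can be evaluated numerically; the hypothetical helper below is a minimal sketch assuming NumPy.

```python
import numpy as np

def esdf(A, x, y):
    """Empirical spectral distribution function of a square matrix A at (x, y):
    the fraction of eigenvalues lambda_i with Re(lambda_i) <= x and Im(lambda_i) <= y."""
    eig = np.linalg.eigvals(A)
    return np.mean((eig.real <= x) & (eig.imag <= y))

# For the identity matrix of order 3, every eigenvalue is 1 + 0i.
print(esdf(np.eye(3), 1.0, 0.0))  # prints 1.0
```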

Let $\{A_n\}_{n=1}^{\infty}$ be a sequence of square matrices with the corresponding ESDF $\{F_n\}_{n=1}^{\infty}$. The *Limiting Spectral Distribution* (or measure) (LSD) of the sequence is defined as the weak limit of the sequence $\{F_n\}_{n=1}^{\infty}$, if it exists.

If $\{A_n\}$ are random, the limit is understood to be in some probabilistic sense, such as "almost surely" or "in probability". Suppose the elements of $\{A_n\}$ are defined on some probability space $(\Omega, \mathcal{F}, P)$, that is, $\{A_n\}$ are random. Let $F$ be a nonrandom distribution function. We say the ESD of $A_n$ converges to the *limiting spectral distribution* (LSD) $F$ in $L_2$ if at all continuity points $(x, y)$ of $F$,

$$\int_{\Omega} \big(F_n(x, y) - F(x, y)\big)^2\, dP(\omega) \to 0 \quad \text{as } n \to \infty,$$

and converges in probability to $F$ if for every $\epsilon > 0$ and at all continuity points $(x, y)$ of $F$,

$$P\big(|F_n(x, y) - F(x, y)| > \epsilon\big) \to 0 \quad \text{as } n \to \infty.$$

*∗*Research supported by CSIR Fellowship, Dept. of Science and Technology, Govt. of India.


For detailed information on limiting spectral distributions of large dimensional random matrices see [Bai(1999)] and also [Bose and Sen(2008)].

In this article we focus on obtaining the LSD of the *circulant matrix* $C_n$ given by

$$C_n = \frac{1}{\sqrt{n}}\begin{pmatrix}
x_0 & x_1 & x_2 & \cdots & x_{n-2} & x_{n-1}\\
x_{n-1} & x_0 & x_1 & \cdots & x_{n-3} & x_{n-2}\\
x_{n-2} & x_{n-1} & x_0 & \cdots & x_{n-4} & x_{n-3}\\
\vdots & & & & & \vdots\\
x_1 & x_2 & x_3 & \cdots & x_{n-1} & x_0
\end{pmatrix}.$$

So, the $(i, j)$th element of the matrix is $x_{(j-i+n) \bmod n}$. The eigenvalues are given by (see for example [Brockwell and Davis(2002)])

$$\lambda_k = \frac{1}{\sqrt{n}}\sum_{l=0}^{n-1} x_l e^{i\omega_k l} = b_k + i c_k \quad \forall\, k = 1, 2, \ldots, n, \quad \text{where}$$

$$\omega_k = \frac{2\pi k}{n}, \quad b_k = \frac{1}{\sqrt{n}}\sum_{l=0}^{n-1} x_l \cos(\omega_k l), \quad c_k = \frac{1}{\sqrt{n}}\sum_{l=0}^{n-1} x_l \sin(\omega_k l).$$
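The eigenvalue formula above is easy to check numerically. The following sketch (our own illustration, with an arbitrary size and seed) builds $C_n$ entrywise and compares its computed spectrum with $\{n^{-1/2}\sum_l x_l e^{i\omega_k l}\}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal(n)

# C_n[i, j] = x_{(j - i + n) mod n} / sqrt(n)
C = np.array([[x[(j - i) % n] for j in range(n)] for i in range(n)]) / np.sqrt(n)

# Formula: lambda_k = n^{-1/2} sum_l x_l e^{i w_k l}, with w_k = 2*pi*k/n.
l = np.arange(n)
lam_formula = np.array([x @ np.exp(1j * 2 * np.pi * k * l / n)
                        for k in range(n)]) / np.sqrt(n)

lam_eig = np.linalg.eigvals(C)

# Each formula value matches some computed eigenvalue (compare as multisets,
# robust to the arbitrary ordering returned by eigvals).
for lam in lam_formula:
    assert np.min(np.abs(lam_eig - lam)) < 1e-10
print("eigenvalue formula verified for n =", n)
```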

The existence of the LSD of $C_n$ is given by the following theorem of [Bose and Mitra(2002)].

*Theorem* 1.1. Let $\{x_i\}$ be a sequence of independent random variables with mean 0 and variance 1 and $\sup_i E|x_i|^3 < \infty$. Then the ESD of $C_n$ converges in $L_2$ to the two-dimensional normal distribution given by $N_2(0, D)$, where $D$ is a diagonal matrix with diagonal entries $1/2$.

We investigate the existence of the LSD of $C_n$ under a dependent situation. Let $\{x_n;\ n \ge 0\}$ be a two-sided moving average process,

$$x_n = \sum_{i=-\infty}^{\infty} a_i \epsilon_{n-i},$$

where $\{a_n;\ n \in \mathbb{Z}\} \in l_1$, that is $\sum_n |a_n| < \infty$, are nonrandom and $\{\epsilon_i;\ i \in \mathbb{Z}\}$ are i.i.d. random variables with mean zero and variance one. We show that the LSD of $C_n$ continues to exist in this dependent situation. Define $\gamma_h = \operatorname{Cov}(x_{t+h}, x_t)$. Then it is easy to see that $\sum_{j\in\mathbb{Z}} |\gamma_j| < \infty$, and the *spectral density function* of $\{x_n\}$ is given by

$$f(\omega) = \frac{1}{2\pi}\sum_{k\in\mathbb{Z}} \gamma_k \exp(ik\omega) = \frac{1}{2\pi}\Big[\gamma_0 + 2\sum_{k\ge 1}\gamma_k \cos(k\omega)\Big]$$

for $\omega \in [0, 2\pi]$.

Let $f^* = \inf_{\omega\in[0,2\pi]} f(\omega)$ and $C_0 = \{\omega \in [0, 2\pi];\ f(\omega) = 0\}$. For $k = 1, 2, \ldots, n$, define

$$\xi_{2k-1} = \frac{1}{\sqrt{n}}\sum_{t=0}^{n-1}\epsilon_t\cos(\omega_k t), \quad \xi_{2k} = \frac{1}{\sqrt{n}}\sum_{t=0}^{n-1}\epsilon_t\sin(\omega_k t).$$

Define

$$B(\omega) = \begin{pmatrix} a_1(e^{i\omega}) & -a_2(e^{i\omega})\\ a_2(e^{i\omega}) & a_1(e^{i\omega}) \end{pmatrix},$$

where $a_1(e^{i\omega}) = \mathcal{R}[a(e^{i\omega})]$, $a_2(e^{i\omega}) = \mathcal{I}[a(e^{i\omega})]$, $a(e^{i\omega})$ is the same as defined in Lemma 1.3, and for $z \in \mathbb{C}$, $\mathcal{R}(z)$, $\mathcal{I}(z)$ denote the real and imaginary parts of $z$ respectively. It is easy to see that

$$|a(e^{i\omega})|^2 = a_1(e^{i\omega})^2 + a_2(e^{i\omega})^2 = 2\pi f(\omega).$$
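The identity $|a(e^{i\omega})|^2 = 2\pi f(\omega)$ can be sanity-checked numerically for a concrete process; the sketch below uses a hypothetical MA(1) example $x_n = \epsilon_n + a\,\epsilon_{n-1}$ (our choice of coefficients, not from the article), for which $\gamma_0 = 1 + a^2$, $\gamma_1 = a$, and $a(e^{i\omega}) = 1 + a e^{i\omega}$.

```python
import numpy as np

# Hypothetical MA(1): x_n = eps_n + a * eps_{n-1}, i.e. a_0 = 1, a_1 = a.
a = 0.5
gamma0 = 1 + a**2          # Cov(x_t, x_t)
gamma1 = a                 # Cov(x_{t+1}, x_t); gamma_k = 0 for k >= 2

def f(w):
    # spectral density: (1/2pi) [gamma_0 + 2 * sum_{k>=1} gamma_k cos(k w)]
    return (gamma0 + 2 * gamma1 * np.cos(w)) / (2 * np.pi)

def a_transfer(w):
    # a(e^{iw}) = sum_j a_j e^{iwj} = 1 + a e^{iw}
    return 1 + a * np.exp(1j * w)

# Check |a(e^{iw})|^2 = 2*pi*f(w) on a grid of frequencies.
w = np.linspace(0, 2 * np.pi, 101)
assert np.allclose(np.abs(a_transfer(w))**2, 2 * np.pi * f(w))
print("identity |a|^2 = 2*pi*f verified for the MA(1) example")
```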

Define for $(x, y) \in \mathbb{R}^2$ and $\omega \in [0, 2\pi]$,

$$H(\omega, x, y) = \begin{cases} P\big(B(\omega)(N_1, N_2)' \le \sqrt{2}(x, y)'\big) & \text{if } f(\omega) \ne 0,\\ I(x \ge 0,\ y \ge 0) & \text{if } f(\omega) = 0,\end{cases}$$

where $N_1$, $N_2$ are independent standard normal variables. Since $a(e^{i\omega})$ is continuous on $[0, 2\pi]$, it is easy to verify that for fixed $(x, y)$, $H$ is a bounded continuous function in $\omega$. Hence we may define

$$F(x, y) = \int_0^1 H(2\pi s, x, y)\, ds.$$

$F$ is a proper distribution function.

For any Borel set $B$, let $\lambda(B)$ denote the corresponding Lebesgue measure. It is easy to see that

(i) if $\lambda(C_0) = 0$ then $F$ is continuous everywhere and,

(ii) if $\lambda(C_0) \ne 0$ then $F$ is discontinuous *only* on $D_1 = \{(x, y) : xy = 0\}$.

*Theorem* 1.2. Suppose $\{\epsilon_i\}$ are i.i.d. with $E|\epsilon_i|^{2+\delta} < \infty$ for some $\delta > 0$. Then the ESD of $C_n$ converges in $L_2$ to the LSD

$$F(x, y) = \int_0^1 H(2\pi s, x, y)\, ds,$$

and if $\lambda(C_0) = 0$ we have

$$F(x, y) = \int\!\!\int I_{\{(v_1, v_2) \le (x, y)\}}\Big[\int_0^1 I_{\{f(2\pi s)\ne 0\}}\,\frac{1}{2\pi^2 f(2\pi s)}\,e^{-\frac{v_1^2 + v_2^2}{2\pi f(2\pi s)}}\, ds\Big]\, dv_1\, dv_2.$$

*Remark* 1.1. If $\inf_{\omega\in[0,2\pi]} f(\omega) > 0$, we can write $F$ in the following form:

$$F(x, y) = \int\!\!\int I_{\{(v_1, v_2) \le (x, y)\}}\Big[\int_0^1 \frac{1}{2\pi^2 f(2\pi s)}\,e^{-\frac{v_1^2 + v_2^2}{2\pi f(2\pi s)}}\, ds\Big]\, dv_1\, dv_2.$$

*Remark* 1.2. If $\{x_i\}$ are i.i.d., then $f(\omega) = 1/2\pi$ for all $\omega \in [0, 2\pi]$ and the LSD is the standard complex normal distribution. This agrees with Theorem 1.1.
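Remark 1.2 admits a quick Monte Carlo check (our own illustration, with arbitrary size and seed): for i.i.d. standard normal input, the ESD of $C_n$ at $(0, 0)$ should approach $P(N_1/\sqrt{2} \le 0,\ N_2/\sqrt{2} \le 0) = 1/4$. The eigenvalues are obtained in $O(n\log n)$ via the FFT rather than an eigensolver.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
x = rng.standard_normal(n)            # i.i.d. input sequence

# lambda_k = n^{-1/2} sum_l x_l e^{+i w_k l}; numpy's fft uses e^{-i w_k l},
# so conjugate (x is real, and the check below is symmetric in Im anyway).
lam = np.conj(np.fft.fft(x)) / np.sqrt(n)

# ESD at (0, 0): fraction of eigenvalues in the closed third quadrant.
esd_00 = np.mean((lam.real <= 0) & (lam.imag <= 0))
print(esd_00)                          # should be close to 1/4
assert abs(esd_00 - 0.25) < 0.05
```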

The proof of the theorem mainly depends on the following two lemmas. Lemma 1.3 follows from [Fan and Yao(2003)] (Theorem 2.14(ii), page 63). For completeness, we have provided a proof.

The proof of Lemma 1.4 follows easily from [Bhattacharya and Ranga Rao(1976)] (Corollary 18.3, page 184). We omit the details.

*Lemma* 1.3. Let $x_t = \sum_{j=-\infty}^{\infty} a_j \epsilon_{t-j}$ for $t \ge 0$, where $\{\epsilon_t\}$ are i.i.d. random variables with mean 0, variance 1, and $\sum_{j=-\infty}^{\infty} |a_j| < \infty$. Then for $k = 1, 2, \ldots, n$,

$$\lambda_k = a(e^{i\omega_k})[\xi_{2k-1} + i\xi_{2k}] + Y_n(\omega_k),$$

where $a(e^{i\omega_k}) = \sum_{j=-\infty}^{\infty} a_j e^{i\omega_k j}$ and $\max_{1\le k\le n} E|Y_n(\omega_k)|^2 \to 0$ as $n \to \infty$.

*Proof.*

$$\begin{aligned}
\lambda_k &= \frac{1}{\sqrt{n}}\sum_{t=0}^{n-1} x_t e^{i\omega_k t}\\
&= \frac{1}{\sqrt{n}}\sum_{j=-\infty}^{\infty} a_j e^{i\omega_k j}\sum_{t=0}^{n-1}\epsilon_{t-j}e^{i\omega_k(t-j)}\\
&= \frac{1}{\sqrt{n}}\sum_{j=-\infty}^{\infty} a_j e^{i\omega_k j}\Big(\sum_{t=0}^{n-1}\epsilon_t e^{i\omega_k t} + U_{nj}\Big)\\
&= a(e^{i\omega_k})[\xi_{2k-1} + i\xi_{2k}] + Y_n(\omega_k),
\end{aligned}$$

where

$$a(e^{i\omega_k}) = \sum_{j=-\infty}^{\infty} a_j e^{i\omega_k j}, \quad U_{nj} = \sum_{t=-j}^{n-1-j}\epsilon_t e^{i\omega_k t} - \sum_{t=0}^{n-1}\epsilon_t e^{i\omega_k t}, \quad Y_n(\omega_k) = n^{-1/2}\sum_{j=-\infty}^{\infty} a_j e^{i\omega_k j} U_{nj}.$$

Note that if $|j| < n$, $U_{nj}$ is a sum of $2|j|$ independent random variables, whereas if $|j| \ge n$, $U_{nj}$ is a sum of $2n$ independent random variables. Thus $E|U_{nj}|^2 \le 2\min(|j|, n)$. Therefore, for any fixed positive integer $l$ and $n > l$,

$$\begin{aligned}
E|Y_n(\omega_k)|^2 &\le \frac{1}{n}\Big[\sum_{j=-\infty}^{\infty}|a_j|\,(EU_{nj}^2)^{1/2}\Big]^2 \quad \Big(\because\ \sum_{-\infty}^{\infty}|a_j| < \infty\Big)\\
&\le \frac{2}{n}\Big[\sum_{j=-\infty}^{\infty}|a_j|\,\{\min(|j|, n)\}^{1/2}\Big]^2\\
&\le 2\Big[\frac{1}{\sqrt{n}}\sum_{|j|\le l}|a_j||j|^{1/2} + \sum_{|j|>l}|a_j|\Big]^2.
\end{aligned}$$

Note that the right-hand side of the above expression is independent of $k$, and as $n \to \infty$ it can be made smaller than any given positive constant by choosing $l$ large enough. Hence $\max_{1\le k\le n} E|Y_n(\omega_k)|^2 \to 0$. $\square$
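Lemma 1.3 can be illustrated on a single simulated path: with finitely supported coefficients the remainder $Y_n(\omega_k) = \lambda_k - a(e^{i\omega_k})[\xi_{2k-1} + i\xi_{2k}]$ should be uniformly small in $k$. The coefficients below (a hypothetical choice, not from the article) keep the edge effects easy to see.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
# Hypothetical coefficients supported on {-1, 0, 1}; any summable sequence works.
coeffs = {-1: 0.3, 0: 1.0, 1: 0.5}

eps = rng.standard_normal(n + 2)       # holds eps_{-1}, eps_0, ..., eps_n
e = lambda t: eps[t + 1]               # eps_t with the index shift made explicit
x = np.array([sum(a * e(t - j) for j, a in coeffs.items()) for t in range(n)])

w = 2 * np.pi * np.arange(n) / n
# For a real sequence y, conj(fft(y))[k] = sum_t y_t e^{+i w_k t}.
lam   = np.conj(np.fft.fft(x)) / np.sqrt(n)                    # lambda_k
dft_e = np.conj(np.fft.fft(eps[1:n + 1])) / np.sqrt(n)         # xi_{2k-1} + i xi_{2k}
a_w   = sum(a * np.exp(1j * w * j) for j, a in coeffs.items())  # a(e^{i w_k})

Y = lam - a_w * dft_e
print(np.max(np.abs(Y)))   # remainder is uniformly small, as the lemma predicts
assert np.max(np.abs(Y)) < 0.5
```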

*Lemma* 1.4. Let $X_1, \ldots, X_k$ be independent random vectors with values in $\mathbb{R}^d$, having zero means and an average positive-definite covariance matrix $V_k = k^{-1}\sum_{j=1}^{k} \operatorname{Cov} X_j$. Let $G_k$ denote the distribution of $k^{-1/2}T_k(X_1 + \cdots + X_k)$, where $T_k$ is the symmetric, positive-definite matrix satisfying $T_k^2 = V_k^{-1}$, $k \ge 1$. If for some $\delta > 0$, $E\|X_j\|^{2+\delta} < \infty$, then

$$\sup_{C\in\mathcal{C}} |G_k(C) - \Phi_{0,I}(C)| \le c\,k^{-\delta/2}\Big[k^{-1}\sum_{j=1}^{k} E\|T_k X_j\|^{2+\delta}\Big] \le c\,k^{-\delta/2}\big(\lambda_{\min}(V_k)\big)^{-(2+\delta)}\Big[k^{-1}\sum_{j=1}^{k} E\|X_j\|^{2+\delta}\Big],$$

where $\Phi_{0,I}$ is the normal probability measure with mean zero and identity covariance matrix, $\mathcal{C}$ is the class of all Borel-measurable convex subsets of $\mathbb{R}^d$, and $c$ is a constant depending only on $d$.

*Proof of Theorem 1.2:* We first assume $\lambda(C_0) = 0$. To prove the theorem it suffices to show that for each $x, y \in \mathbb{R}$,

$$\text{(1.1)}\qquad E(F_n(x, y)) \to F(x, y) \quad \text{and} \quad V(F_n(x, y)) \to 0.$$

Note that we may ignore the eigenvalue $\lambda_n$, and also $\lambda_{n/2}$ whenever $n$ is even, since they contribute at most $2/n$ to the ESD $F_n(x, y)$. So for $x, y \in \mathbb{R}$,

$$E[F_n(x, y)] \sim n^{-1}\sum_{k=1,\ k\ne n/2}^{n-1} P(b_k \le x,\ c_k \le y).$$

Define for $k = 1, 2, \ldots, n$,

$$\eta_k = (\xi_{2k-1}, \xi_{2k})', \quad Y_{1n}(\omega_k) = \mathcal{R}[Y_n(\omega_k)], \quad Y_{2n}(\omega_k) = \mathcal{I}[Y_n(\omega_k)],$$

$$A_k = \begin{pmatrix} a_1(e^{i\omega_k}) & -a_2(e^{i\omega_k})\\ a_2(e^{i\omega_k}) & a_1(e^{i\omega_k}) \end{pmatrix},$$

where $a(e^{i\omega_k})$, $Y_n(\omega_k)$ are the same as defined in Lemma 1.3. Then $(b_k, c_k)' = A_k\eta_k + (Y_{1n}(\omega_k), Y_{2n}(\omega_k))'$. From Lemma 1.3, it is intuitively clear that for large $n$, $\lambda_k \sim a(e^{i\omega_k})[\xi_{2k-1} + i\xi_{2k}]$. So first we show that for large $n$,

$$\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P(b_k \le x,\ c_k \le y) \sim \frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P(A_k\eta_k \le (x, y)').$$

Note

$$\begin{aligned}
&\Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P(b_k \le x,\ c_k \le y) - \frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P(A_k\eta_k \le (x, y)')\Big|\\
&= \Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(A_k\eta_k + (Y_{1n}(\omega_k), Y_{2n}(\omega_k))' \le (x, y)'\big) - P(A_k\eta_k \le (x, y)')\Big|\\
&\le \frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big((|Y_{1n}(\omega_k)|, |Y_{2n}(\omega_k)|) > (\epsilon, \epsilon)\big)\\
&\quad + \Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(A_k\eta_k \le (x, y)',\ (|Y_{1n}(\omega_k)|, |Y_{2n}(\omega_k)|) \le (\epsilon, \epsilon)\big) - P(A_k\eta_k \le (x, y)')\Big|\\
&= T_1 + T_2, \quad \text{say.}
\end{aligned}$$

Now using Lemma 1.3, as $n \to \infty$,

$$T_1 \le \frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(|Y_n(\omega_k)|^2 > 2\epsilon^2\big) \le \frac{1}{2\epsilon^2}\sup_k E|Y_n(\omega_k)|^2 \to 0.$$

Next,

$$T_2 \le \max\Big\{\Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(A_k\eta_k \le (x+\epsilon, y+\epsilon)'\big) - P(A_k\eta_k \le (x, y)')\Big|,\ \Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(A_k\eta_k \le (x-\epsilon, y-\epsilon)'\big) - P(A_k\eta_k \le (x, y)')\Big|\Big\}$$

and

$$\Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(A_k\eta_k \le (x+\epsilon, y+\epsilon)'\big) - P(A_k\eta_k \le (x, y)')\Big| \le T_3 + T_4 + T_5,$$
where

$$T_3 = \Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(A_k\eta_k \le (x, y)'\big) - P\big(A_k(N_1, N_2)' \le (\sqrt{2}x, \sqrt{2}y)'\big)\Big|,$$

$$T_4 = \Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(A_k\eta_k \le (x+\epsilon, y+\epsilon)'\big) - P\big(A_k(N_1, N_2)' \le (\sqrt{2}x+\sqrt{2}\epsilon, \sqrt{2}y+\sqrt{2}\epsilon)'\big)\Big|,$$

$$T_5 = \Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(A_k(N_1, N_2)' \le (\sqrt{2}x+\sqrt{2}\epsilon, \sqrt{2}y+\sqrt{2}\epsilon)'\big) - P\big(A_k(N_1, N_2)' \le (\sqrt{2}x, \sqrt{2}y)'\big)\Big|.$$

To show $T_3, T_4 \to 0$, define for $k = 1, 2, \ldots, n-1$ (except for $k = n/2$) and $l = 0, 1, 2, \ldots, n-1$,

$$X_{l,k} = \big(\sqrt{2}\,\epsilon_l\cos(\omega_k l),\ \sqrt{2}\,\epsilon_l\sin(\omega_k l)\big)'.$$

Note that

$$\text{(1.2)}\qquad E(X_{l,k}) = 0 \quad \forall\, l, k, n,$$

$$\text{(1.3)}\qquad n^{-1}\sum_{l=0}^{n-1}\operatorname{Cov}(X_{l,k}) = I \quad \forall\, k, n.$$

Note that for $k \ne n/2$,

$$\{A_k\eta_k \le (x, y)'\} = \Big\{A_k\Big(n^{-1/2}\sum_{l=0}^{n-1}X_{l,k}\Big) \le (\sqrt{2}x, \sqrt{2}y)'\Big\}.$$

Since $\{(r, s) : A_k(r, s)' \le (\sqrt{2}x, \sqrt{2}y)'\}$ is a convex set in $\mathbb{R}^2$ and $\{X_{l,k},\ l = 0, 1, \ldots, n-1\}$ satisfies (1.2) and (1.3), we can apply Lemma 1.4 for $k \ne n/2$ to get

$$\Big|P\Big(A_k\Big(n^{-1/2}\sum_{l=0}^{n-1}X_{l,k}\Big) \le (\sqrt{2}x, \sqrt{2}y)'\Big) - P\big(A_k(N_1, N_2)' \le (\sqrt{2}x, \sqrt{2}y)'\big)\Big| \le c\,n^{-\delta/2}\Big[n^{-1}\sum_{l=0}^{n-1} E\|X_{l,k}\|^{2+\delta}\Big],$$

where $N_1$, $N_2$ are independent standard normal variates. Note that

$$\sup_{1\le k\le n}\Big[n^{-1}\sum_{l=0}^{n-1} E\|X_{l,k}\|^{2+\delta}\Big] \le M < \infty$$
and, as $n \to \infty$,

$$\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1}\Big|P\Big(A_k\Big(n^{-1/2}\sum_{l=0}^{n-1}X_{l,k}\Big) \le (\sqrt{2}x, \sqrt{2}y)'\Big) - P\big(A_k(N_1, N_2)' \le (\sqrt{2}x, \sqrt{2}y)'\big)\Big| \le cM n^{-\delta/2} \to 0.$$

Hence $T_3 \to 0$, and similarly $T_4 \to 0$. Moreover,

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P(A_k\eta_k \le (x, y)') = \lim_{n\to\infty}\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} H\Big(\frac{2\pi k}{n}, x, y\Big) = \int_0^1 H(2\pi s, x, y)\, ds.$$

Therefore

$$\lim_{n\to\infty} T_5 = \Big|\int_0^1 H(2\pi s, x+\epsilon, y+\epsilon)\, ds - \int_0^1 H(2\pi s, x, y)\, ds\Big| \le \int_0^1\big|H(2\pi s, x+\epsilon, y+\epsilon) - H(2\pi s, x, y)\big|\, ds.$$

Note that

$$\big|H(2\pi s, x+\epsilon, y+\epsilon) - H(2\pi s, x, y)\big| \le 2$$

and for fixed $(x, y) \in \mathbb{R}^2$, as $\epsilon \to 0$,

$$\text{(1.4)}\qquad \big|H(2\pi s, x+\epsilon, y+\epsilon) - H(2\pi s, x, y)\big| \to 0.$$

Hence by the DCT, $\lim_{\epsilon\to 0}\lim_{n\to\infty} T_5 = 0$ and

$$\lim_{\epsilon\to 0}\lim_{n\to\infty}\Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(A_k\eta_k \le (x+\epsilon, y+\epsilon)'\big) - P(A_k\eta_k \le (x, y)')\Big| = 0.$$

Also note that for fixed $(x, y)$, as $\epsilon \to 0$,

$$\text{(1.5)}\qquad \big|H(2\pi s, x-\epsilon, y-\epsilon) - H(2\pi s, x, y)\big| \to 0$$

outside the measure zero set $C_0$. Using this fact and proceeding as above, we can show that

$$\lim_{\epsilon\to 0}\lim_{n\to\infty}\Big|\frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P\big(A_k\eta_k \le (x-\epsilon, y-\epsilon)'\big) - P(A_k\eta_k \le (x, y)')\Big| = 0,$$

and hence $\lim_{\epsilon\to 0}\lim_{n\to\infty} T_2 = 0$. Therefore, as $n \to \infty$,

$$E[F_n(x, y)] \sim \frac{1}{n}\sum_{k=1,\ k\ne n/2}^{n-1} P(A_k\eta_k \le (x, y)') \to \int_0^1 H(2\pi s, x, y)\, ds,$$

and since $\lambda(C_0) = 0$, we have

$$\begin{aligned}
\int_0^1 H(2\pi s, x, y)\, ds &= \int_0^1 I_{\{f(2\pi s)\ne 0\}}\, H(2\pi s, x, y)\, ds\\
&= \int_0^1 I_{\{f(2\pi s)\ne 0\}}\Big[\int\!\!\int I_{\{B(2\pi s)(u_1, u_2)' \le \sqrt{2}(x, y)'\}}\,\frac{1}{2\pi}\,e^{-\frac{u_1^2+u_2^2}{2}}\, du_1\, du_2\Big]\, ds\\
&= \int_0^1 I_{\{f(2\pi s)\ne 0\}}\Big[\int\!\!\int I_{\{(v_1, v_2) \le (x, y)\}}\,\frac{1}{2\pi^2 f(2\pi s)}\,e^{-\frac{v_1^2+v_2^2}{2\pi f(2\pi s)}}\, dv_1\, dv_2\Big]\, ds\\
&= \int\!\!\int I_{\{(v_1, v_2) \le (x, y)\}}\Big[\int_0^1 I_{\{f(2\pi s)\ne 0\}}\,\frac{1}{2\pi^2 f(2\pi s)}\,e^{-\frac{v_1^2+v_2^2}{2\pi f(2\pi s)}}\, ds\Big]\, dv_1\, dv_2\\
&= F(x, y).
\end{aligned}$$

Now, to show $V[F_n(x, y)] \to 0$, it is enough to show that

$$\text{(1.6)}\qquad \frac{1}{n^2}\sum_{k\ne k';\ k,k'=1}^{n}\operatorname{Cov}(J_k, J_{k'}) \to 0,$$

where for $1 \le k \le n$, $J_k$ is the indicator of $\{b_k \le x,\ c_k \le y\}$. Observe that

$$\frac{1}{n^2}\sum_{k\ne k';\ k,k'=1}^{n}\operatorname{Cov}(J_k, J_{k'}) = \frac{1}{n^2}\sum_{k\ne k';\ k,k'=1}^{n}\big[E(J_k J_{k'}) - E(J_k)E(J_{k'})\big].$$

Now as $n \to \infty$,

$$\frac{1}{n^2}\sum_{k\ne k';\ k,k'=1}^{n}E(J_k)E(J_{k'}) = \Big(\frac{1}{n}\sum_{k=1}^{n}E(J_k)\Big)^2 - \frac{1}{n^2}\sum_{k=1}^{n}\big(E(J_k)\big)^2 \to F(x, y)^2.$$

So to show (1.6), it is enough to show that, as $n \to \infty$,

$$\frac{1}{n^2}\sum_{k\ne k';\ k,k'=1}^{n}E(J_k J_{k'}) \to F(x, y)^2.$$

Along the lines of the proof used to show $\frac{1}{n}\sum_{k=1}^{n}P\big(A_k(N_1, N_2)' \le (\sqrt{2}x, \sqrt{2}y)'\big) \to F(x, y)$, one may now extend the vectors of two coordinates defined above to ones with four coordinates and proceed exactly as above to verify this. We omit the routine details.

When $\lambda(C_0) \ne 0$, we have to show (1.1) only at the continuity points of $F$, and $F$ is continuous on the complement of $D_1$. All the above steps except (1.4), (1.5) in the proof go through for all $(x, y)$, but on the complement of $D_1$, (1.4), (1.5) also hold. Hence if $\lambda(C_0) \ne 0$, we have our required LSD. This proves the Theorem. $\square$

References

[Bai(1999)] Z. D. Bai. Methodologies in spectral analysis of large-dimensional random matrices, a review. *Statist. Sinica*, 9(3):611–677, 1999. ISSN 1017-0405. With comments by G. J. Rodgers and Jack W. Silverstein; and a rejoinder by the author.

[Bhattacharya and Ranga Rao(1976)] R. N. Bhattacharya and R. Ranga Rao. *Normal approximation and asymptotic expansions*. John Wiley & Sons, New York-London-Sydney, 1976. Wiley Series in Probability and Mathematical Statistics.

[Bose and Mitra(2002)] Arup Bose and Joydip Mitra. Limiting spectral distribution of a special circulant. *Statist. Probab. Lett.*, 60(1):111–120, 2002. ISSN 0167-7152.

[Bose and Sen(2008)] Arup Bose and Arnab Sen. Another look at the moment method for large dimensional random matrices. *Electron. J. Probab.*, 13:no. 21, 588–628, 2008. ISSN 1083-6489.

[Brockwell and Davis(2002)] Peter J. Brockwell and Richard A. Davis. *Introduction to time series and forecasting*. Springer Texts in Statistics. Springer-Verlag, New York, second edition, 2002. ISBN 0-387-95351-5. With 1 CD-ROM (Windows).

[Fan and Yao(2003)] Jianqing Fan and Qiwei Yao. *Nonlinear time series*. Springer Series in Statistics. Springer-Verlag, New York, 2003. ISBN 0-387-95170-9. Nonparametric and parametric methods.

(Arup Bose)Stat-Math Unit, Indian Statistical Institute, 203 B. T. Rd., Calcutta 700108, India, E-mail: abose@isical.ac.in, bosearu@gmail.com

(Koushik Saha) Stat-Math Unit, Indian Statistical Institute, 203 B. T. Rd., Calcutta 700108, India, E-mail: koushik r@isical.ac.in