PRAMANA - journal of physics
© Printed in India
Vol. 48, No. 1, January 1997, pp. 7-48
(Part I) Special issue on "Nonlinearity & Chaos in the Physical Sciences"

Random matrices and matrix models: The JNU lectures

MADAN LAL MEHTA

School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110067, India

Permanent Address: Centre d'Études Nucléaires de Saclay, 91191 Gif-sur-Yvette Cedex, France

Abstract. A course of lectures was given at the Jawaharlal Nehru University and the Jamia Milia Islamia, New Delhi, during February-March 1996. The following notes were distributed to the audience before each lecture. These notes, which are sketchy and do not go into details, were meant to help students follow the standard literature on the subject. They are collected here (with the exercises!) in the hope that they might prove useful to a larger community of research workers.

Keywords. Random matrices; matrix ensembles; statistical properties of eigenvalue spectra.

PACS Nos 02.10; 05.45; 05.90

1. Introduction

Before one can consider the problem of "random matrices" one has to know a few preliminary things.

• Numbers: A set in which addition, subtraction, multiplication and division can be performed with the usual rules of arithmetic, namely

(a) addition is commutative and associative: x + y = y + x, x + (y + z) = (x + y) + z;

(b) multiplication is associative: x(yz) = (xy)z;

(c) existence of a zero and of a unit: x + 0 = 0 + x = x; 1·x = x·1 = x;

(d) multiplication is distributive over addition: x(y + z) = xy + xz, (y + z)x = yx + zx;

(e) existence of a negative and an inverse: x + (-x) = (-x) + x = 0; x·x^{-1} = x^{-1}·x = 1 if x ≠ 0;

is called a number field, its elements are numbers.

• There are only three kinds of numbers: real, complex and quaternion (theorem of Frobenius).

• Real and complex numbers are very familiar: the complex numbers are formed with two real numbers and one special unit i with i^2 = -1.

• Quaternions are not so familiar: they are formed with four real numbers and three units e_1, e_2 and e_3 with e_1^2 = e_2^2 = e_3^2 = e_1 e_2 e_3 = -1.

• Multiplication is commutative for real and complex numbers, but not for quaternions: xy ≠ yx in general.

• Matrices are rectangular arrays of numbers. With a square matrix one can usually associate a number called its determinant. Matrices can be added or multiplied if their numbers of rows and columns are appropriate. The values of λ for which det(A - λI) is zero are called eigenvalues of A. (Here I is the unit matrix.) For an eigenvalue λ of A the non-zero column vector x satisfying the equation Ax = λx is called the eigenvector of A corresponding to the eigenvalue λ.

• A square matrix A of order n × n has n eigenvalues and at most n linearly independent eigenvectors.

• One can define many elementary operations on matrices such as transposition, hermitian conjugation and taking the dual; the results are denoted respectively by A^T, A^† and A^D. We call A symmetric if A = A^T, hermitian if A = A^†, self-dual if A = A^D, orthogonal if AA^T = I, unitary if AA^† = I and symplectic if AA^D = I.

• The eigenvalues of a real symmetric matrix, of a complex hermitian matrix and of a quaternion self-dual matrix are all real; those of a real orthogonal matrix, of a complex unitary matrix and of a quaternion symplectic matrix are all of the form e^{iθ}, θ real.

• The eigenvalues of an arbitrary real or complex matrix are in general complex numbers, and the question of the eigenvalues of an arbitrary quaternion matrix is meaningless, since the determinant of such a matrix cannot be defined in a reasonable way.

• The problem of random matrices is very simple to state. Given an n × n real symmetric random matrix (random means that the probability density of the matrix elements is given), what can one say about its eigenvalues? Or about a few of its eigenvalues? Or of its eigenvectors? Or a few of its eigenvectors? The words "real symmetric" in the above sentence can be replaced by any of the following: "complex hermitian", "quaternion self-dual", "symmetric unitary", "unitary", "self-dual unitary", "real" or "complex".

• Why should one worry about such questions? Some history: In the 1920's and 1930's some mathematicians studied such questions in relation to their applications in preparing birth and death statistics for insurance companies. In 1942-1944, Selberg evaluated a related integral and used it to find the density and the distribution of prime numbers. But this paper remained unnoticed for about 40 years. More about it later. In the 1940's and 1950's nuclear bombs and nuclear power stations appeared, and for obvious reasons people wanted to know about the neutron resonance energies of many nuclei such as uranium, cadmium and aluminium; their positions, heights and widths.

According to quantum mechanics each nucleus is described by a hamiltonian function, its eigenvalues giving the positions and widths of the resonances. So choose a basis, express the hamiltonian as a large matrix in this basis and compute its eigenvalues. The hamiltonian function, i.e. the nuclear interactions, are not known so well, and even if known are quite complicated. Hence Wigner, who knew the question very well, suggested that for statistical properties it will be sufficient to take the hamiltonian matrix elements as random numbers. In the 1960's and 1970's a large amount of experimental data about nuclear resonances in various nuclei was collected; also the mathematical problem of random matrices was more or less solved for one particular model, namely when the matrix elements have a normal or Gaussian distribution. A comparison of the two was quite satisfactory. In the 1970's it was discovered somewhat unexpectedly that the zeros of the Riemann zeta function on the critical line behave as if they were the eigenvalues of a complex hermitian random matrix. (A small digression on the zeta function: the zeta function is defined by ζ(z) = Σ_{n=1}^{∞} n^{-z} for Re z > 1 and by analytic continuation for other complex values of z. This function is zero for z = -2n, n = 1, 2, 3, ..., and the only other zeros are suspected to be at z = ½ + iγ_j, γ_j real positive (this is the Riemann hypothesis). The real positive numbers γ_j seem to be random and have statistical properties identical to those of the (real) eigenvalues of a large hermitian matrix whose elements are random complex numbers.) Of late it has been found that ultrasonic resonance frequencies of steel or concrete beams, the electromagnetic resonances in closed cavities of random shape, the possible quantum energies of a particle confined in a box of random shape, the positions of trees in a forest, etc. all seem to follow the same statistical laws as the eigenvalues of a random matrix. The mathematical methods developed for the random matrix problem find use in other fields such as quantum gravity and string theory.

A list of references is given at the end of these notes in a general bibliography. [1] and [2] are cited in the text as RM and MT respectively.

2. (Gaussian) Ensembles of matrices

2.1 Invariance under a change of basis

In these lectures I will consider ensembles of matrices, H, whose elements are random (explained below), and which are invariant under a transformation U of the basis, namely

P(UHU^{-1}) = P(H). (1)

The three cases of importance are when H is

• real symmetric. U is then real orthogonal. This case has the parameter (to be explained below) β = 1.

• complex hermitian. U is then complex unitary and β = 2.

• quaternion self-dual. This case is unfamiliar, and the following subsection deals with this case in detail. U is then quaternion real symplectic and β = 4.

Here β is the number of real components of the matrix elements, 1 for real, 2 for complex and 4 for quaternion. Invariance implies that P(H) depends only on the traces of powers of H:

P(H) = P(tr H, tr H^2, ..., tr H^n). (2)

(Exercise: Why only the first n powers? Hint: Only symmetric functions of the eigenvalues survive such transformations.)
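The invariance (1) and the trace form (2) can be checked numerically. The following sketch (my own illustration, not part of the original notes) conjugates a real symmetric matrix by a random orthogonal matrix and confirms that every tr H^k, and hence any P(H) of the form (2), is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# A real symmetric matrix H and a random orthogonal matrix U (QR of a Gaussian matrix)
A = rng.standard_normal((n, n))
H = (A + A.T) / 2
U, _ = np.linalg.qr(rng.standard_normal((n, n)))

Hp = U @ H @ U.T  # the transformed matrix U H U^{-1}, since U^{-1} = U^T

# tr H^k is invariant for every k, so P(H') = P(H) for any P of the form (2)
for k in range(1, n + 1):
    t1 = np.trace(np.linalg.matrix_power(H, k))
    t2 = np.trace(np.linalg.matrix_power(Hp, k))
    assert np.isclose(t1, t2)
print("traces invariant under the change of basis")
```

The same check works for the unitary (β = 2) and symplectic (β = 4) cases with the appropriate conjugation.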

2.2 Digression on quaternions

Quaternions (or quaternion numbers) are not so familiar as the real or complex numbers.

Just as complex numbers have two components, quatemions have four components, a = ao + alel + a2e2 + a3e3 = ao + a.e. One calls ao the scalar part, and ~ the vectorial part of a. The dual of a is a o = ao - ~.e-'. The units el, e2, e3 have the multiplication laws

el 2 = e ~ = e 2 = - 1 , e l e 2 = - e 2 e l = e 3 , e 2 e 3 - - - e 3 e 2 = e l ,

e3e 1 = - e l e 3 = e 2. (3)


The three 2 × 2 Pauli matrices σ_1, σ_2 and σ_3, familiar to physicists, also have similar multiplication laws and therefore it is sometimes convenient to represent the e_j and the quaternions as 2 × 2 matrices,

C(1) = [ 1  0 ; 0  1 ],  C(e_1) = [ 0  1 ; -1  0 ],  C(e_2) = [ 0  i ; i  0 ],  C(e_3) = [ i  0 ; 0  -i ], (4)

C(a) = [ a_0 + ia_3  a_1 + ia_2 ; -a_1 + ia_2  a_0 - ia_3 ],  C(a^D) = [ a_0 - ia_3  -a_1 - ia_2 ; a_1 - ia_2  a_0 + ia_3 ]. (5)

Thus an n × n quaternion matrix will sometimes be represented by a 2n × 2n complex matrix. Usually the a_j are taken as real numbers, but we will allow them to be complex. Thus some non-zero quaternions will not have an inverse. This being admitted, any 2n × 2n complex matrix can be thought of as cut into 2 × 2 blocks and each 2 × 2 block expressed as a quaternion; if these quaternions are real (i.e. their components are real numbers), we call it a quaternion real matrix. This establishes a one to one correspondence between an n × n quaternion matrix A and a 2n × 2n complex matrix C(A). One verifies easily (please do it!) that

C(A^D) = -C(e_1 I) C(A)^T C(e_1 I). (6)

Hence A is self-dual if and only if C(e_1 I)C(A) is an anti-symmetric matrix of twice the size.
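The 2 × 2 representation is easy to check numerically. This sketch (my own illustration; the sample component values are arbitrary) verifies the multiplication laws (3) and the duality relation (6) for a single quaternion (n = 1):

```python
import numpy as np

# 2x2 complex representations of 1, e1, e2, e3, chosen to match C(a) in eq. (5)
one = np.eye(2, dtype=complex)
e1 = np.array([[0, 1], [-1, 0]], dtype=complex)
e2 = np.array([[0, 1j], [1j, 0]], dtype=complex)
e3 = np.array([[1j, 0], [0, -1j]], dtype=complex)

# multiplication laws (3)
assert np.allclose(e1 @ e1, -one) and np.allclose(e2 @ e2, -one) and np.allclose(e3 @ e3, -one)
assert np.allclose(e1 @ e2, e3) and np.allclose(e2 @ e3, e1) and np.allclose(e3 @ e1, e2)
assert np.allclose(e1 @ e2 @ e3, -one)

# a quaternion a = a0 + a1 e1 + a2 e2 + a3 e3 and its dual aD = a0 - a1 e1 - a2 e2 - a3 e3
a0, a1, a2, a3 = 0.5, -1.0, 2.0, 0.25
Ca = a0 * one + a1 * e1 + a2 * e2 + a3 * e3
CaD = a0 * one - a1 * e1 - a2 * e2 - a3 * e3

# duality relation (6) with n = 1: C(aD) = -C(e1) C(a)^T C(e1)
assert np.allclose(CaD, -e1 @ Ca.T @ e1)
print("quaternion representation checks pass")
```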

We state here one definition and two theorems without proof.

Theorem 1. For a quaternion matrix A it is impossible to define a determinant having the three properties:

(i) det A = 0, if and only if Ax = 0 has a non-zero solution x;

(ii) det(AB) = det A · det B;

(iii) det A is multilinear in the rows (or in the columns) of A.

(For a proof see Chapter 8 of MT.) In order to define a determinant of a quaternion matrix one has to give up one or more of the above properties.

DEFINITION 1

For a quaternion matrix A we define

det A = Σ_P (-1)^{n-p} (a_{j_1 j_2} a_{j_2 j_3} ... a_{j_r j_1})_0 (a_{k_1 k_2} a_{k_2 k_3} ... a_{k_s k_1})_0 ..., (7)

where the sum is over all permutations P of the indices (1, 2, ..., n) consisting of the p exclusive cycles (j_1, j_2, ..., j_r), (k_1, k_2, ..., k_s), .... The subscript 0 means that one should take the scalar part of each cyclic product. If the quaternion real matrix A is self-dual, then the subscript 0 in eq. (7) above is not necessary under certain conditions (see Chapter 8 of MT for details). (Exercise: Which of the three properties of determinants is missing in this definition?)


Theorem 2. Let C(A) be the 2n × 2n complex matrix corresponding to the n × n self-dual quaternion matrix A. Then C(e_1 I)C(A) is a 2n × 2n anti-symmetric matrix and the determinant of A is equal to the Pfaffian of C(e_1 I)C(A):

det A = Pf[C(e_1 I)C(A)]. (8)

(For a proof see Chapter 8 of MT, or devise one by yourself.)

2.3 Statistical independence of the various elements

Besides invariance, we may require the various linearly independent real components of the matrix elements of H to be statistically independent, i.e.

P(H) = ∏_j f_j(H_{jj}) ∏_{j<k} ∏_{α=0}^{β-1} f_{jk}^{(α)}(H_{jk}^{(α)}). (9)

These requirements then force P(H) to be an exponential with only the first two powers of H,

P(H) = exp(-a tr H^2 + b tr H + c). (10)

For a proof consider H' = U^{-1}HU, with U - I having non-zero small elements ε and -ε only in the positions (1,2) and (2,1) respectively. Then compare P(H) and P(H') to the first order in ε. For the anti-symmetric real, anti-hermitian complex or anti-self-dual quaternion real matrices, the same two constraints, namely (i) invariance under an orthogonal real, unitary complex or symplectic quaternion real matrix, and (ii) statistical independence of the linearly independent real parameters entering in the matrix elements, lead to P(H) = exp[-a tr H^2 + c].
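Equation (10) makes the independence explicit: for real symmetric H, tr H^2 = Σ_j H_{jj}^2 + 2 Σ_{j<k} H_{jk}^2, so exp(-a tr H^2 + b tr H + c) factorizes into a product of independent Gaussian densities, one per independent element. A quick numerical check of this trace identity (an illustrative sketch, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
H = (A + A.T) / 2  # real symmetric test matrix

# tr H^2 splits into a sum over the n + n(n-1)/2 independent real parameters of H
tr_H2 = np.trace(H @ H)
split = np.sum(np.diag(H) ** 2) + 2 * np.sum(np.triu(H, k=1) ** 2)
assert np.isclose(tr_H2, split)
print("tr H^2 factorizes over the independent elements")
```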

2.4 Change of variables to eigenvalues

We take H to be either real symmetric, or complex hermitian, or quaternion real self-dual. The matrix H has n real diagonal elements and ½n(n - 1) non-diagonal independent elements which are real, complex or quaternion real. Thus the total number of real variables in H is n + ½n(n - 1)β. Also H has n real eigenvalues. So if we want the eigenvalues of H as new variables, we have to supplement them with ½n(n - 1)β extra variables. What is the best choice? The matrix which diagonalizes H also has ½n(n - 1)β (angular) real parameters to characterize it. Choosing these angular parameters and the n (real) eigenvalues of H as new variables, the exponential part of P(H) is easily expressed since tr H^j = Σ_{k=1}^{n} θ_k^j. One has now to evaluate the Jacobian. The result, after some calculation, is

J = ∏_{j<k} |θ_j - θ_k|^β · f, (11)

where θ_j are the eigenvalues of H, β is 1, 2 or 4 for real, complex or quaternion real H, and f depends only on the angular parameters specifying the matrix U which diagonalizes H. For anti-symmetric, anti-hermitian or anti-self-dual matrices one should note that the non-zero eigenvalues come in pairs. Changing the origin and the scale, we can thus write the joint probability density of the eigenvalues as

P_β(x_1, ..., x_n) = C(β, n) exp(-Σ_{j=1}^{n} x_j^2) ∏_{1≤j<k≤n} |x_j - x_k|^β, (12)

where C(β, n) is a normalization constant and x_j = √a (θ_j - b/(2a)) are the scaled eigenvalues measured from a proper origin.
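The step above, that the exponential part of P(H) goes over to a function of the eigenvalues alone, rests on tr H^j = Σ_k θ_k^j; this is easy to confirm numerically (illustrative sketch with an arbitrary test matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
H = (A + A.T) / 2

theta = np.linalg.eigvalsh(H)  # the n real eigenvalues of H

# tr H^j equals the j-th power sum of the eigenvalues, for every j
for j in range(1, 5):
    assert np.isclose(np.trace(np.linalg.matrix_power(H, j)), np.sum(theta ** j))
print("tr H^j equals the power sums of the eigenvalues")
```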

3. Some general questions which are well posed

The m-point correlation function R_m(x_1, ..., x_m) is defined as

R_m(x_1, ..., x_m) = n!/(n - m)! ∫ ... ∫ P(x_1, ..., x_n) dx_{m+1} ... dx_n. (13)

In fact R_m(x_1, ..., x_m) dx_1 ... dx_m is the probability of having one eigenvalue in each of the m intervals dx_1, ..., dx_m around the points x_1, ..., x_m respectively. When the m points separate into two or more clusters far away from each other, then R_m is usually a product of functions depending separately on each cluster. To measure the effect of neighborhood on the probabilities one defines cluster functions T_m(x_1, ..., x_m). They are also known under the name of cumulants and correspond to connected graphs in the diagrammatical representation of the perturbation series due to Ursell, Yvon-Mayer in statistical mechanics, and more spectacularly due to Feynman in field theory. They are zero whenever the m points divide into two or more sets far away from each other. The functions T_m can be expressed in terms of the R_m and conversely. So it is sufficient to evaluate one of these two sets.

Level spacing functions. One might ask for the probability E(m, t) of having exactly m eigenvalues inside an interval of length s = 2t chosen at random. Usually the length of the interval is measured in terms of the local mean spacing. Or, one might ask for the probability F(m, s) of having m eigenvalues inside an interval of length s measured from a randomly chosen eigenvalue; or for the probability p(m, s) ds that the distance between two eigenvalues lies between s and s + ds, while this interval contains exactly m eigenvalues inside it. These three sets of functions satisfy the relations

F(m, s) = -(d/dt) Σ_{j=0}^{m} E(j, t),  p(m, s) = -(d/ds) Σ_{j=0}^{m} F(j, s),  for m ≥ 0. (14)

It will therefore be sufficient to know E(m, t), from which the other functions can be deduced. One may of course ask other questions; for example, what is the average number of eigenvalues in an interval s chosen at random? and what is its dispersion around this average? Or how well a linear graph represents the number of eigenvalues in an interval as a function of its length?

Correlation functions for the Gaussian ensembles. The case β = 2 of complex hermitian random matrices is mathematically the simplest, since we need only real symmetric matrices and their determinants. For the cases β = 1 (most important for applications) and β = 4, one needs self-dual quaternion matrices and their determinants.

To evaluate the m-point correlation function we will use the following theorem.

Theorem 3. Let the complex function f(x, y) be such that

(i) f(x, y) = [f(y, x)]*, (15)

(ii) ∫ f(x, x) dμ(x) = c, (16)

(iii) ∫ f(x, y) f(y, z) dμ(y) = f(x, z); (17)

then the integral of the n × n determinant det[f(x_i, x_j)] over dμ(x_n) is, apart from a constant, equal to the (n - 1) × (n - 1) determinant obtained from the original one by removing the row and column containing x_n; i.e.

∫ det[f(x_i, x_j)]_{i,j=1,...,n} dμ(x_n) = (c - n + 1) det[f(x_i, x_j)]_{i,j=1,...,n-1}. (18)

The same theorem is valid when f(x, y) is a quaternion real function satisfying the requirements

(i) f(x, y) = [f(y, x)]^D,

(ii) ∫ f(x, x) dμ(x) = c, (19)

(iii) ∫ f(x, y) f(y, z) dμ(y) = f(x, z) + λ f(x, z) - f(x, z) λ, (20)

where λ is a constant quaternion and the determinant of quaternion matrices is given in Definition 1, eq. (7). As we said earlier the determinant of a quaternion matrix is a convenient way of speaking about Pfaffians of an anti-symmetric matrix. For a proof see chapter 8 of MT. To calculate the m-point correlation function all we have to do is to express

P_β(x_1, ..., x_n) = exp(-Σ_{j=1}^{n} x_j^2) ∏_{1≤j<k≤n} |x_j - x_k|^β (21)

as an n × n determinant det[f(x_i, x_j)] with the function f(x, y) satisfying the requirements of the above Theorem 3. When β = 2, our f(x, y) will be a real function and when β is 1 or 4, it will be a quaternion real function.
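For the complex (β = 2) version of Theorem 3, a small numerical experiment is possible. Below is an illustrative sketch (not from the notes): f is the rank-2 reproducing kernel built from the first two Hermite functions, so c = 2 in (16), and integrating the 2 × 2 determinant det[f(x_i, x_j)] over x_2 with dμ(y) = dy should give (c - 2 + 1) f(x_1, x_1):

```python
import numpy as np
from math import factorial

# Gauss-Hermite quadrature: sum of w_i g(y_i) approximates integral of g(y) e^{-y^2} dy
nodes, weights = np.polynomial.hermite.hermgauss(60)

def phi(j, x):
    # orthonormal Hermite functions phi_j(x) = (2^j j! sqrt(pi))^{-1/2} H_j(x) e^{-x^2/2}
    c = np.zeros(j + 1)
    c[j] = 1.0
    norm = np.sqrt(2.0 ** j * factorial(j) * np.sqrt(np.pi))
    return np.polynomial.hermite.hermval(x, c) * np.exp(-np.asarray(x) ** 2 / 2) / norm

def f(x, y):
    # rank-2 reproducing kernel; c = integral of f(x, x) dx = 2
    return phi(0, x) * phi(0, y) + phi(1, x) * phi(1, y)

x1 = 0.3
# integrand det [[f(x1, x1), f(x1, y)], [f(y, x1), f(y, y)]], integrated over y
dets = f(x1, x1) * f(nodes, nodes) - f(x1, nodes) * f(nodes, x1)
integral = np.sum(weights * np.exp(nodes ** 2) * dets)  # undo the e^{-y^2} weight

assert np.isclose(integral, (2 - 2 + 1) * f(x1, x1))
print("integration lemma verified for n = 2, c = 2")
```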

3.1 Alternants

In the last century Vandermonde evaluated the following determinant

det[x_i^{j-1}]_{i,j=1,...,n} = ∏_{1≤i<j≤n} (x_j - x_i). (22)

Actually, recalling the elementary properties of a determinant one can replace the power x^{j-1} by any polynomial C_{j-1}(x) of degree j - 1. Thus

det[C_{j-1}(x_i)]_{i,j=1,...,n} = constant ∏_{1≤i<j≤n} (x_j - x_i), (23)

where the constant on the right hand side is the product of the coefficients of x^j in C_j(x) for j = 0, 1, ..., n - 1. The polynomials C_j(x) being arbitrary, we can choose them as we like. Similarly,

det[Q_{j-1}(x_i), Q'_{j-1}(x_i)]_{i=1,...,n; j=1,...,2n} = constant ∏_{1≤i<j≤n} (x_j - x_i)^4, (24)

where the polynomials Q_j(x) are again arbitrary, Q'_j(x) is the derivative of Q_j(x) and the constant is the product of the coefficients of x^j in Q_j(x) for j = 0, ..., 2n - 1. We will choose three sets of polynomials R_j(x), C_j(x) and Q_j(x) corresponding to the three cases β = 1, β = 2 and β = 4 to express P_β(x_1, ..., x_n), eq. (21), as an n × n determinant det[f(x_i, x_j)] satisfying the conditions (15)-(17) or (20). (The letters R, C and Q are chosen here to recall that they correspond to real, complex and quaternion matrices.)
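Equation (22) is easy to confirm directly; a small numerical illustration (my own, with arbitrary sample points):

```python
import numpy as np

x = np.array([0.5, -1.2, 2.0, 3.3])
n = len(x)

V = np.vander(x, increasing=True)  # V[i, j] = x_i^j, columns j = 0 .. n-1
lhs = np.linalg.det(V)
rhs = np.prod([x[j] - x[i] for i in range(n) for j in range(i + 1, n)])
assert np.isclose(lhs, rhs)
print("Vandermonde identity holds")
```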

3.2 Orthogonal polynomials

Given a non-negative weight function w(x) for a ≤ x ≤ b, such that ∫_a^b x^j w(x) dx is finite for every integer j ≥ 0, one can define a series of polynomials p_j(x) such that

∫_a^b p_j(x) p_k(x) w(x) dx = c_j δ_{jk}, (25)

where δ_{jk} is 1 if j = k and 0 if j ≠ k. The constants c_j are positive and can be chosen to be 1. The polynomials p_j(x) are called orthogonal; they are called orthonormal if c_j = 1. Actually, these polynomials result from the orthogonalization of the power series (1, x, x^2, ...) with respect to the symmetric scalar product (x^i, x^j) = ∫ x^i x^j w(x) dx. But this scalar product need not be symmetric and can be chosen otherwise. For the case β = 2, we find it convenient to choose the polynomials C_j(x) as the familiar orthogonal polynomials with weight w(x) = exp(-x^2) for -∞ < x < ∞. For the other two cases β = 1 and β = 4, we will choose the polynomials R_j(x) and Q_j(x) as skew-orthogonal polynomials resulting from anti-symmetric scalar products defined later. We choose the polynomials C_j(x) to be proportional to the familiar Hermite polynomials H_j(x) = e^{x^2} (-d/dx)^j e^{-x^2}, and the functions

φ_j(x) = (2^j j! √π)^{-1/2} H_j(x) exp(-x^2/2), (26)

so that the φ_j(x) are orthonormal on (-∞, ∞),

∫_{-∞}^{∞} φ_j(x) φ_k(x) dx = δ_{jk}. (27)

Then from eq. (23)

e^{-½(x_1^2 + ... + x_n^2)} ∏_{1≤j<i≤n} (x_i - x_j) = constant det[φ_{j-1}(x_i)]_{i,j=1,...,n}, (28)

or, for the case β = 2,

P_2(x_1, ..., x_n) = constant det[φ_{j-1}(x_i)] · det[φ_{j-1}(x_k)] (29)

= constant det[ Σ_{l=0}^{n-1} φ_l(x_j) φ_l(x_k) ]_{j,k=1,...,n} (30)

= constant det[f(x_j, x_k)], (31)

where

f(x, y) = Σ_{l=0}^{n-1} φ_l(x) φ_l(y). (32)

Now let us examine whether this f(x, y) satisfies the conditions of Theorem 3. As f(x, y) is real and symmetric, condition (15) holds true. Also, due to the orthonormality of the functions φ_j(x), eq. (27), one has ∫_{-∞}^{∞} f(x, x) dx = n, and

∫_{-∞}^{∞} f(x, y) f(y, z) dy = Σ_{l,m} φ_l(x) φ_m(z) ∫_{-∞}^{∞} φ_l(y) φ_m(y) dy (33)

= Σ_l φ_l(x) φ_l(z) = f(x, z). (34)

Thus we have expressed P_2(x_1, ..., x_n) as an n × n determinant [f(x_j, x_k)], where f(x, y) satisfies the conditions of Theorem 3. Using Theorem 3 several times one gets the m-point correlation function in case β = 2. For the undetermined constant in eq. (31) it is sufficient to integrate over all the n variables and check the normalization of P_2(x_1, ..., x_n). Thus in case β = 2,

R_m(x_1, ..., x_m) = det[f(x_i, x_j)]_{i,j=1,...,m}. (35)

The m-point cluster function T_m(x_1, ..., x_m) is then

T_m(x_1, ..., x_m) = Σ f(x_1, x_{i_2}) f(x_{i_2}, x_{i_3}) ... f(x_{i_m}, x_1), (36)

where the sum is taken over all (m - 1)! permutations (i_2, ..., i_m) of (2, 3, ..., m).
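The two kernel properties used above are quick to verify with Gauss-Hermite quadrature; the sketch below (illustrative, not from the notes) checks ∫ f(x, x) dx = n and the reproducing property (34) for n = 4:

```python
import numpy as np
from math import factorial

nodes, weights = np.polynomial.hermite.hermgauss(80)

def phi(j, x):
    # orthonormal Hermite functions of eq. (26)
    c = np.zeros(j + 1)
    c[j] = 1.0
    norm = np.sqrt(2.0 ** j * factorial(j) * np.sqrt(np.pi))
    return np.polynomial.hermite.hermval(x, c) * np.exp(-np.asarray(x) ** 2 / 2) / norm

n = 4
def f(x, y):
    # the kernel of eq. (32)
    return sum(phi(l, x) * phi(l, y) for l in range(n))

w = weights * np.exp(nodes ** 2)  # quadrature weights for plain dx integrals

# trace condition: integral of f(x, x) dx equals n
assert np.isclose(np.sum(w * f(nodes, nodes)), n)

# reproducing property (34): integral of f(x, y) f(y, z) dy equals f(x, z)
x0, z0 = 0.7, -0.4
assert np.isclose(np.sum(w * f(x0, nodes) * f(nodes, z0)), f(x0, z0))
print("conditions of Theorem 3 hold for the Hermite kernel")
```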

3.3 Skew-orthogonal polynomials

Instead of the symmetric scalar product one may consider an anti-symmetric scalar product

(f, g)_Q = ∫ [f(x) g'(x) - g(x) f'(x)] w(x) dx (37)

or

(f, g)_R = ∫∫ f(x) g(y) w(x, y) dx dy (38)

with w(x, y) = -w(y, x). Such a scalar product can be used to construct skew-orthogonal polynomials. We will use them to write P_β(x_1, ..., x_n) for β = 4 and β = 1 as an n × n quaternion determinant. Let us first take the simpler case β = 4, and choose polynomials Q_j(x) satisfying the skew-orthogonality relations

∫_{-∞}^{∞} [Q_{2j}(x) Q'_{2k+1}(x) - Q'_{2j}(x) Q_{2k+1}(x)] e^{-x^2} dx = δ_{jk}, (39)

∫_{-∞}^{∞} [Q_{2j}(x) Q'_{2k}(x) - Q'_{2j}(x) Q_{2k}(x)] e^{-x^2} dx = 0, (40)

∫_{-∞}^{∞} [Q_{2j+1}(x) Q'_{2k+1}(x) - Q'_{2j+1}(x) Q_{2k+1}(x)] e^{-x^2} dx = 0, (41)

i.e.

∫_{-∞}^{∞} ψ_j(x) ψ_k^D(x) e^{-x^2} dx = δ_{jk}, (42)

where ψ_j(x) is the quaternion having the 2 × 2 matrix representation

C(ψ_j(x)) = [ Q_{2j}(x)  Q_{2j+1}(x) ; Q'_{2j}(x)  Q'_{2j+1}(x) ] (43)

and ψ_j^D(x) is its dual. Now choose the quaternion f_4(x, y) as

f_4(x, y) = Σ_{j=0}^{n-1} ψ_j(x) ψ_j^D(y). (44)

Theorem 4. We claim that

det[f_4(x_j, x_k)] = constant ∏_{1≤j<k≤n} (x_j - x_k)^4, (45)

∫_{-∞}^{∞} f_4(x, x) e^{-x^2} dx = n, (46)

and

∫_{-∞}^{∞} f_4(x, y) f_4(y, z) e^{-y^2} dy = f_4(x, z). (47)

Proof. From the definition of f_4(x, y), eq. (44), and the orthonormality relation, eq. (42), one has

∫_{-∞}^{∞} f_4(x, y) f_4(y, z) e^{-y^2} dy = Σ_{l,m} ψ_l(x) ψ_m^D(z) ∫_{-∞}^{∞} ψ_l^D(y) ψ_m(y) e^{-y^2} dy (48)

= Σ_l ψ_l(x) ψ_l^D(z) = f_4(x, z), (49)

which is (47). Also, since ψ_j(x) and ψ_j^D(x) commute (!),

∫_{-∞}^{∞} f_4(x, x) e^{-x^2} dx = Σ_{j=0}^{n-1} ∫_{-∞}^{∞} ψ_j(x) ψ_j^D(x) e^{-x^2} dx (50)

= n, (51)

which is (46). To prove (45), observe that det[f_4(x_j, x_k)] is equal to the Pfaffian of the 2n × 2n anti-symmetric matrix C(e_1 I)C([f_4(x_j, x_k)]), which in turn is the square root of det C([f_4(x_j, x_k)]), i.e.

det[f_4(x_j, x_k)] = det C([ψ_{j-1}(x_k)]). (52)

But from (43) and (24), this is just (45). Thus we have expressed P_4(x_1, ..., x_n) as an n × n quaternion determinant

P_4(x_1, ..., x_n) = constant det[f_4(x_j, x_k)] · e^{-(x_1^2 + ... + x_n^2)}, (53)

and the function f_4(x, y) satisfies the requirements of Theorem 3. Hence the m-point correlation function can be written as an m × m quaternion determinant

R_m(x_1, ..., x_m) = constant e^{-(x_1^2 + ... + x_m^2)} · det[f_4(x_j, x_k)]_{j,k=1,...,m}, (54)

or as a Pfaffian of a 2m × 2m real anti-symmetric matrix. The constant in the last equation can be fixed by integrating over all the variables, the result should be 1.

Next we take the case β = 1. The treatment is simpler when n is even, n = 2r. Let

g_j(x) = ∫_{-∞}^{∞} ε(x - y) R_j(y) e^{-y^2} dy, (55)

where R_j(x) is a polynomial of degree j, and ε(x) = (1/2) sign(x), i.e. it is 1/2 for x > 0, -1/2 for x < 0 and 0 for x = 0. Let the quaternion ψ_j(x) have the 2 × 2 matrix representation

C(ψ_j(x)) = [ g_{2j}(x)  g_{2j+1}(x) ; g'_{2j}(x)  g'_{2j+1}(x) ], (56)

and we choose the polynomials R_j(x) such that

∫_{-∞}^{∞} ψ_j(x) ψ_k^D(x) dx = δ_{jk}. (57)

The quaternion G(x, y) = Σ_{j=0}^{r-1} ψ_j(x) ψ_j^D(y), having the 2 × 2 matrix representation

C(G(x, y)) = [ S(x, y)  I(x, y) ; D(x, y)  S(y, x) ] (58)

with

S(x, y) = Σ_{j=0}^{r-1} [g_{2j}(x) g'_{2j+1}(y) - g'_{2j}(y) g_{2j+1}(x)],

D(x, y) = Σ_{j=0}^{r-1} [g'_{2j}(x) g'_{2j+1}(y) - g'_{2j}(y) g'_{2j+1}(x)],

I(x, y) = Σ_{j=0}^{r-1} [g_{2j}(x) g_{2j+1}(y) - g_{2j}(y) g_{2j+1}(x)], (59)

satisfies (15)-(17), i.e. G(x, y) is self-dual and

∫_{-∞}^{∞} G(x, y) G(y, z) dy = G(x, z),  ∫_{-∞}^{∞} G(x, x) dx = n, (60)

but its determinant is zero, since the matrix [G(x_j, x_k)] is the product of two matrices C(ψ_ℓ(x_j)) and C(ψ_ℓ^D(x_k)) of orders 2n × n and n × 2n respectively (ℓ taking values 0, 1, ..., r - 1, while j and k take values 1, 2, ..., n, and n = 2r). This also shows that the rank of the matrix [G(x_j, x_k)] is n and its last n columns [I(x_j, x_k), S(x_k, x_j)]^T are linear combinations of its first n columns [S(x_j, x_k), D(x_j, x_k)]^T. Therefore if we write

C(f_1(x, y)) = C(G(x, y)) - [ 0  ε(x - y) ; 0  0 ], (61)

then the quaternion f_1(x, y) is the dual of f_1(y, x) and with some algebra one verifies that

∫_{-∞}^{∞} f_1(x, y) f_1(y, z) dy = f_1(x, z) + λ f_1(x, z) - f_1(x, z) λ, (62)

with λ = ½ e_1. For this verification one can conveniently use the 2 × 2 matrix representations of the various quaternions and the fact that g'_j(x) = R_j(x) e^{-x^2}. To see that P_1(x_1, ..., x_n) is proportional to det[f_1(x_j, x_k)], one proceeds as follows. The first n columns of C([f_1(x_j, x_k)]) are the same as those of [G(x_j, x_k)], while the last n columns of C([f_1(x_j, x_k)]) differ from those of [G(x_j, x_k)] by [ε(x_j - x_k), 0]^T. Hence

det C([f_1(x_j, x_k)]) = det [ S(x_j, x_k)  I(x_j, x_k) - ε(x_j - x_k) ; D(x_j, x_k)  S(x_k, x_j) ]

= det [ S(x_j, x_k)  -ε(x_j - x_k) ; D(x_j, x_k)  0 ]

= det[ε(x_j - x_k)] · det[D(x_j, x_k)]. (63)

Now

det[D(x_j, x_k)] = det[ Σ_{ℓ=0}^{r-1} (g'_{2ℓ}(x_j) g'_{2ℓ+1}(x_k) - g'_{2ℓ}(x_k) g'_{2ℓ+1}(x_j)) ]

= {det[g'_{ℓ-1}(x_j)]_{ℓ,j=1,...,n}}^2 = {det[R_{ℓ-1}(x_j)]_{ℓ,j=1,...,n}}^2 e^{-2(x_1^2 + ... + x_n^2)} (64)

and

det[ε(x_j - x_k)] = constant. (65)

Hence

det C([f_1(x_j, x_k)]) = constant (P_1(x_1, ..., x_n))^2,

or

P_1(x_1, ..., x_n) = constant det[f_1(x_j, x_k)]. (66)

When n is odd, n = 2r + 1, the working is a little more involved, since we need to consider the Pfaffian of a (2r + 1) × (2r + 1) anti-symmetric matrix. See RM or MT; the result is similar. The m-point correlation function is again given by (54) where f_4(x, y) is now replaced by f_1(x, y); as in the case of the Gaussian symplectic ensemble it is an m × m quaternion determinant or the square root of the determinant of an ordinary 2m × 2m matrix. The m-point cluster function is given by (36) where the function f(x, y) is now replaced by either of the quaternions f_4(x, y) or f_1(x, y), the result being a scalar. The m-point cluster function T_m(x_1, ..., x_m) is then

T_m(x_1, ..., x_m) = Σ f(x_1, x_{i_2}) f(x_{i_2}, x_{i_3}) ... f(x_{i_m}, x_1), (67)

where the sum is taken over all (m - 1)! permutations of (2, 3, ..., m).

3.4 1- and 2-point correlation functions

The one point function R_1(x) gives the density of eigenvalues at x. For β = 2, it is

R_1(x) = Σ_{j=0}^{n-1} φ_j^2(x) (68)

and in the limit of large n it is the "semi-circle",

R_1(x) = (1/π) √(2n - x^2) for |x| < √(2n), and 0 for |x| > √(2n). (69)

For β = 1 or β = 4, R_1(x) differs from (68) and in the limit of large n goes to the "semi-circle", eq. (69). Thus the average distance between the eigenvalues near the origin is π/√(2n). The two-point correlation functions are different for the three cases, even in the limit of large n. When n → ∞, x_j → 0 and x_j √(2n)/π = y_j are constants, one has

R_2(x_1, x_2) = 1 - s^2(r), β = 2, (70)

= 1 - s^2(r) - (ds(r)/dr) ∫_r^∞ s(z) dz, β = 1, (71)

= 1 - s^2(2r) + (ds(2r)/dr) ∫_0^r s(2z) dz, β = 4, (72)

where r = |y_1 - y_2| and s(r) = sin(πr)/(πr).
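The finite-n sum (68) can be compared with the limiting form (69) at the origin, where (69) gives R_1(0) = √(2n)/π. An illustrative sketch (not from the notes) using the standard three-term recurrence of the Hermite functions φ_j:

```python
import numpy as np

def phis_at_zero(n):
    # values phi_j(0) for j = 0 .. n-1, from the recurrence
    # phi_{j+1}(x) = x sqrt(2/(j+1)) phi_j(x) - sqrt(j/(j+1)) phi_{j-1}(x), at x = 0
    vals = [np.pi ** -0.25, 0.0]  # phi_0(0) and phi_1(0)
    for j in range(1, n):
        vals.append(-np.sqrt(j / (j + 1)) * vals[j - 1])
    return np.array(vals[:n])

n = 200
exact = np.sum(phis_at_zero(n) ** 2)   # eq. (68) evaluated at x = 0
semicircle = np.sqrt(2 * n) / np.pi    # eq. (69) evaluated at x = 0
assert abs(exact - semicircle) / semicircle < 0.02
print(exact, semicircle)
```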

4. Circular ensembles

Three other sets of ensembles (i) symmetric unitary random matrices, (ii) unitary random matrices, and (iii) self-dual unitary random matrices, have been extensively studied. They are defined by the following requirements: (i) The probability density P(A) of a symmetric unitary matrix A should remain invariant under the transformation A → W^T A W, W unitary (case β = 1); (ii) The probability density P(A) of a self-dual unitary matrix A should remain invariant under the transformation A → W^D A W, W unitary (case β = 4); (iii) The probability density P(A) of a unitary matrix A should remain invariant under the transformation A → UAW, U and W unitary (case β = 2).

Actually a small variation in A is characterized in the three cases β = 1, β = 4, and β = 2 respectively by (i) dA = W^T · i dH · W, dH real symmetric, (ii) dA = W^D · i dH · W, dH quaternion real self-dual, (iii) dA = U · i dH · W, dH complex hermitian.

4.1 Probability density for the eigenvalues

The eigenvalues of a unitary matrix A are of the form e^{iθ}, θ real. The matrix A can be diagonalized by a (i) real orthogonal, (ii) quaternion real symplectic, or (iii) complex unitary, matrix respectively in the three cases. For the change of variables from A to its eigenvalues e^{iθ_j} one again has to calculate a Jacobian, which in these cases gives

P_β(θ_1, ..., θ_n) = constant ∏_{1≤j<k≤n} |e^{iθ_j} - e^{iθ_k}|^β, (73)

where β is 1, 4 or 2.
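As a numerical illustration of the starting point of this section (not from the notes): a random unitary matrix, obtained here by the common trick of QR-factorizing a complex Gaussian matrix and fixing the phases of R's diagonal, has all its eigenvalues of the form e^{iθ}:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8

# QR of a complex Gaussian matrix; the diagonal phase fix makes U Haar-distributed
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, R = np.linalg.qr(Z)
U = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

assert np.allclose(U @ U.conj().T, np.eye(n))  # U is unitary
eig = np.linalg.eigvals(U)
assert np.allclose(np.abs(eig), 1.0)           # eigenvalues lie on the unit circle
print("all eigenvalues are of the form e^{i theta}")
```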

4.2 Correlations and cluster functions To compute the correlation function

n! /o 2~r

Rm(O1,... ,Om) - - (n --m)! PO(01,... ,On)dOm+l"" "dOn (74) one can again use Theorem 3, if one can express P a ( 0 1 , . . . , On) as an n x n (ordinary or quatemion) determinant [f(0j, Ok)] with f(~, 7) satisfying the conditions of that theorem, namely, (i)f(~, r/) is the complex conjugate or dual off(l/, (), (ii) fEar f(~, rl ) f07, ( ) d r / = f(~, () + )~f(~, () - f ( ~ , ()A, A a constant quaternion, and (iii) f2,~f(~, ()d~ = c, a real or

a scalar number. The construction of such a matrix [f(Oj, Ok)] is similar to and somewhat simpler than in the case of Gaussian ensembles. The results are as follows. The case /3 = 2 is again the simplest, one needs only ordinary matrices and determinants, no quaternions.

f₂(ξ, η) = (1/2π) Σ_p e^{ip(ξ−η)} = S_n(ξ − η),  (75)

S_n(θ) = sin(nθ/2) / (2π sin(θ/2));  (76)

the sum over p in (75) is over the values

p = −½(n−1), −½(n−3), …, ½(n−3), ½(n−1).  (77)

For cases β = 4 and β = 1 one needs quaternion matrices and their determinants. The results are

f₄(ξ, η) = ½ [ S_{2n}(ξ−η)   D_{2n}(ξ−η) ]
             [ I_{2n}(ξ−η)   S_{2n}(η−ξ) ]

f₁(ξ, η) = [ S_n(ξ−η)               D_n(ξ−η) ]
           [ I_n(ξ−η) − ε_n(ξ−η)    S_n(η−ξ) ]  (78)

with S_n(θ) given by (76) and

D_n(θ) = dS_n(θ)/dθ,  (79)

I_n(θ) = ∫₀^θ S_n(ξ) dξ,   ε_n(θ) = (1/2π) Σ_q (iq)^{−1} e^{iqθ},  (80)

where the sum over q is over ±½(n+1), ±½(n+3), …. When n → ∞, θ_j → 0, and nθ_j/(2π) = y_j finite, then S_n(θ) → s(r), D_n(θ) → (d/dr) s(r), I_n(θ) → ∫₀^r s(ξ) dξ, and the

correlation functions Rm(yl,... ,Ym) become identical to those for the corresponding Gaussian ensembles. In other words, the statistical properties of any finite number m of the eigenvalues in the limit of large n are identical for (i) Gaussian orthogonal and circular orthogonal ensembles, (ii) Gaussian unitary and circular unitary ensembles, (iii) Gaussian symplectic and circular symplectic ensembles.
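The limit quoted above can be checked directly from (76): writing θ = 2πy/n and rescaling by the mean density, (2π/n) S_n(2πy/n) tends to s(y) = sin(πy)/(πy). A quick numerical sketch (assuming numpy):

```python
import numpy as np

def S(n, theta):
    """Finite-n kernel S_n of eq. (76)."""
    return np.sin(n * theta / 2) / (2 * np.pi * np.sin(theta / 2))

n = 10_000
y = np.array([0.3, 0.7, 1.5, 2.4])           # scaled separations
scaled = (2 * np.pi / n) * S(n, 2 * np.pi * y / n)
limit = np.sin(np.pi * y) / (np.pi * y)      # the sine kernel s(y)
err = np.max(np.abs(scaled - limit))
```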

4.3 Relation between the circular orthogonal and circular symplectic ensembles

Take 2n-dimensional matrices in the circular orthogonal ensemble. The joint probability density of their eigenvalues is constant ∏_{1≤j<k≤2n} |e^{iθ_j} − e^{iθ_k}|. Ordering the eigenvalues around the unit circle, θ₁ ≤ θ₂ ≤ ⋯ ≤ θ_{2n}, if we integrate over alternate eigenvalues, say θ₁, θ₃, …, θ_{2n−1}, then we get constant ∏_{1≤j<k≤n} (e^{iθ_{2j}} − e^{iθ_{2k}})⁴, which is proportional to the joint probability density of the eigenvalues of an n × n self-dual unitary matrix. In other words, the statistical properties of the n alternate angles θ_{2j}, where e^{iθ_j} are the eigenvalues of a symmetric unitary matrix of order 2n × 2n taken from the circular orthogonal ensemble, are identical to those of the n angles φ_j, where e^{iφ_j} are the eigenvalues of an n × n quaternion self-dual unitary matrix taken from the circular symplectic ensemble.

(Theorem 10.6.1 of RM.) A similar relation holds between a random superposition of two orthogonal ensembles and the unitary ensemble. More about this later.
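This relation can be explored by simulation. A COE matrix can be realized as S = UᵀU with U Haar-unitary; the sketch below (assuming numpy, and re-deriving the Haar sample via the standard QR construction so the fragment is self-contained) extracts the n alternate eigenangles, which by the theorem follow circular symplectic statistics:

```python
import numpy as np

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def coe_alternate_angles(n, rng):
    """Eigenangles of a 2n x 2n COE matrix S = U^T U, keeping every other ordered angle."""
    u = haar_unitary(2 * n, rng)
    s = u.T @ u                                   # symmetric unitary, COE-distributed
    angles = np.sort(np.angle(np.linalg.eigvals(s)))
    return angles[::2]                            # the n alternate angles

rng = np.random.default_rng(1)
alt = coe_alternate_angles(5, rng)
```

Comparing the spacing histogram of `alt` over many draws with a direct CSE simulation would exhibit the identity of the two ensembles.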

5. Spacing functions E_β(r, t)

Among other quantities in the random matrix theory the spacing functions have some importance. The spacing function E_β(r, t) is the probability that a randomly chosen interval of length 2t contains exactly r eigenvalues. We will be interested in the limit when n is large. The case β = 2 is again mathematically the simplest. To find E₂(0, t), say, one has to integrate P₂(x₁, …, x_n) over all the variables x₁, …, x_n outside an interval of length 2θ. For simplicity we will choose the interval as (−θ, θ):

E₂(0, θ) = ∫_out P₂(x₁, …, x_n) dx₁ ⋯ dx_n.  (81)


But

P₂(x₁, …, x_n) = (1/n!) (det[φ_{j−1}(x_k)])² = det M,  (82)

with

M_{jk} = Σ_{ℓ=1}^{n} φ_j(x_ℓ) φ_k(x_ℓ),   j, k = 0, 1, …, n − 1.  (83)

In order to compute E₂(0, θ), write the first row of the matrix M as a sum of n terms and the determinant as a sum of n determinants. Since all the other rows of M are symmetric in the x_j, all these n determinants will give the same answer on integration. So we can replace the first row of M by φ₀(x₁)φ_k(x₁) and multiply the result by n. Now subtracting suitable multiples of the first row from the other rows we can eliminate x₁ from all of them. The remaining n − 1 rows of M now contain x₂, …, x_n symmetrically, therefore we can replace the second row of M by φ₁(x₂)φ_k(x₂), multiply the result by n − 1 and eliminate x₂ from the remaining rows. And so on. Thus we get

E₂(0, θ) = ∫_out det[φ_{j−1}(x_j) φ_{k−1}(x_j)]_{j,k=1,…,n} dx₁ ⋯ dx_n.  (84)

Since each variable occurs in only one row, we can integrate over them separately,

E₂(0, θ) = ∫_out det[φ_{j−1}(x_j) φ_{k−1}(x_j)] dx₁ ⋯ dx_n
         = det[δ_{jk} − ∫_{−θ}^{θ} φ_j(x) φ_k(x) dx]_{j,k=0,…,n−1}
         = ∏_{j=0}^{n−1} (1 − λ_j),  (85)

where λ_j are the eigenvalues of the symmetric matrix

G_{jk} = ∫_{−θ}^{θ} φ_j(x) φ_k(x) dx,  (86)

i.e. they are the solutions of the algebraic equation

det[λ δ_{jk} − G_{jk}] = 0,  (87)

or of the integral equation

λ φ(x) = ∫_{−θ}^{θ} K_n(x, y) φ(y) dy,  (88)

with the kernel

K_n(x, y) = Σ_{j=0}^{n−1} φ_j(x) φ_j(y).  (89)
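At finite n, (85)-(86) can be evaluated directly once the φ_j are fixed; for the Gaussian unitary case they are the harmonic-oscillator functions (Hermite polynomial times e^{−x²/2}, normalized). A sketch (assuming numpy; the quadrature grid size is an arbitrary choice of this check):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, sqrt, pi

def phi(j, x):
    """Orthonormal oscillator function phi_j(x) = (2^j j! sqrt(pi))^(-1/2) H_j(x) e^(-x^2/2)."""
    c = np.zeros(j + 1); c[j] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2) / sqrt(2.0**j * factorial(j) * sqrt(pi))

def E2_gap(n, theta, m=400):
    """E_2(0, theta) = det(I - G), with G_jk of eq. (86), by Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(m)
    x, w = theta * x, theta * w                  # map nodes/weights to (-theta, theta)
    P = np.array([phi(j, x) for j in range(n)])  # P[j, i] = phi_j(x_i)
    G = (P * w) @ P.T                            # G_jk = sum_i w_i phi_j(x_i) phi_k(x_i)
    return np.linalg.det(np.eye(n) - G)
```

As θ → 0 the matrix G vanishes and E₂(0, θ) → 1, while for larger θ the gap probability decreases.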


(Verify this! Hint: solutions of (88) are of the form Σ_{i=0}^{n−1} c_i φ_i(x), with the c_i not all zero.) When n → ∞, θ → 0, while θ√(2n)/π = t is finite,

K_n(x, y) ≈ sin[√(2n)(x − y)] / (π(x − y)).  (90)

The average distance between the eigenvalues being π/√(2n), we measure distances in this unit. The integral equation (88) in the limit n → ∞ is

λ φ(x) = ∫_{−t}^{t} K(x, y) φ(y) dy,  (91)

with

K(x, y) = sin π(x − y) / (π(x − y)).  (92)
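The kernel (92) is all one needs numerically: the probability that (−t, t) is empty of eigenvalues is the Fredholm determinant det(I − K) over (−t, t), which a Gauss-Legendre discretization already computes well (a sketch assuming numpy; the symmetrized √w K √w form is a standard numerical device, not something from the lectures):

```python
import numpy as np

def sine_kernel(x, y):
    """K(x, y) of eq. (92), with the value 1 on the diagonal."""
    d = np.pi * (x[:, None] - y[None, :])
    out = np.ones_like(d)
    nz = d != 0
    out[nz] = np.sin(d[nz]) / d[nz]
    return out

def E2_0(t, m=60):
    """Gap probability E_2(0, t) as a Fredholm determinant det(I - K) on (-t, t)."""
    x, w = np.polynomial.legendre.leggauss(m)
    x, w = t * x, t * w
    sw = np.sqrt(w)
    K = sine_kernel(x, x) * sw[:, None] * sw[None, :]
    return np.linalg.det(np.eye(m) - K)
```

For small t one expects E₂(0, t) ≈ 1 − 2t, since the interval of length 2t holds 2t eigenvalues on average.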

Much is known about this kernel in the literature. It is the square of another kernel, (2t)^{−1/2} e^{iπxy/t} over (−t, t); it commutes with the differential operator (x² − t²) d²/dx² + 2x d/dx + π²x² and hence has common eigenfunctions with it. The solutions of (91) are known as spheroidal functions; they have been extensively studied and tabulated. The eigenvalues λ of (91) all lie between 0 and 1, can be ordered as 1 > λ₀ > λ₁ > λ₂ > ⋯, and when so ordered the eigenfunctions are alternately even and odd. The limiting form of (85),

E₂(0, t) = ∏_{i=0}^{∞} (1 − λ_i),  (93)

is a fast converging infinite product. In (93) above, we have written t for convenience, though the length of the interval is 2t. The λ_j = λ_j(t) depend of course on t. To find E₂(r, θ), one has to evaluate the integral

E₂(r, θ) = (n!/(r!(n−r)!)) ∫ P₂(x₁, …, x_n) ∏_{i=1}^{r} a(x_i) ∏_{j=r+1}^{n} (1 − a(x_j)) dx₁ ⋯ dx_n,  (94)

where a(x) = 1 if |x| < θ and a(x) = 0 if |x| > θ. Again P₂(x₁, …, x_n), (82), can be written as a sum of determinants. Whenever any variable x_j occurs in two or more rows, these rows are proportional and the determinant is zero. Only those determinants survive in which each variable occurs in one row only, so we can replace P₂(x₁, …, x_n) in (94) by

(1/n!) Σ_{(t)} det[φ_{j−1}(x_{t_j}) φ_{k−1}(x_{t_j})]_{j,k=1,…,n},  (95)

the sum being over all permutations (t₁, …, t_n) of (1, 2, …, n). As each variable occurs in one and only one row in any of these determinants, we can integrate over x₁, …, x_r from −θ to θ and over x_{r+1}, …, x_n outside the interval (−θ, θ). Then we expand each determinant in the Laplace manner according to the r rows containing initially the variables x₁, …, x_r. This gives

E₂(r, θ) = (1/r!) Σ ± det G(i; j) det[I − G](j′; i′).  (96)


The indices i = (i₁, …, i_r) are chosen from (0, …, n − 1), as also the indices j = (j₁, …, j_r). The indices (i₁, …, i_r) are not ordered, while the indices (j₁, …, j_r) are ordered, j₁ < j₂ < ⋯ < j_r. The sum in the above equation is taken over all choices of indices satisfying these conditions. The cofactor [I − G](j′; i′) of order (n − r) × (n − r) is obtained from [I − G] by omitting the rows (i) and columns (j); its determinant is, apart from a sign,

det[I − G](j′; i′) = ± det[I − G] · det[(I − G)⁻¹(i; j)].  (97)

Ordering the indices i₁, …, i_r, 0 ≤ i₁ < ⋯ < i_r ≤ n − 1, and taking into account (97), we can write (96) as

E₂(r, θ) = det[I − G] Σ_{(i;j)} det G(i; j) det[(I − G)⁻¹](i; j).  (98)

Diagonalizing G as above, in the limit n → ∞, θ√(2n)/π = t constant, one has

E₂(r, t) = ∏_{i=0}^{∞} (1 − λ_i) Σ_{(j)} [λ_{j₁}/(1 − λ_{j₁})] ⋯ [λ_{j_r}/(1 − λ_{j_r})],  (99)

the sum being taken over all integers j₁, …, j_r with 0 ≤ j₁ < j₂ < ⋯ < j_r. If we introduce an extra variable z and write

D(z, t) = ∏_{i=0}^{∞} (1 − zλ_i(t)),  (100)

then (93) and (99) can be written as

E₂(r, t) = (1/r!) (−∂/∂z)^r D(z, t)|_{z=1}.  (101)
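Formulas (99)-(101) are easy to test numerically: approximating the λ_i by the eigenvalues of a quadrature discretization of (91), the probabilities E₂(r, t) built from (99) must add up to 1 over r. A sketch (assuming numpy; grid size and spectral truncation are arbitrary choices of the check):

```python
import numpy as np
from itertools import combinations

def sine_lambdas(t, m=60):
    """Approximate eigenvalues lambda_i of eq. (91) on (-t, t)."""
    x, w = np.polynomial.legendre.leggauss(m)
    x, w = t * x, t * w
    d = np.pi * (x[:, None] - x[None, :])
    K = np.ones_like(d)
    nz = d != 0
    K[nz] = np.sin(d[nz]) / d[nz]
    sw = np.sqrt(w)
    lam = np.linalg.eigvalsh(K * sw[:, None] * sw[None, :])
    return np.sort(lam)[::-1]

def E2(r, t, keep=12):
    """E_2(r, t) from eq. (99), truncated to the `keep` largest eigenvalues."""
    lam = np.clip(sine_lambdas(t)[:keep], 0.0, None)   # drop tiny negative round-off
    prod = np.prod(1 - lam)
    terms = (np.prod(lam[list(c)] / (1 - lam[list(c)]))
             for c in combinations(range(keep), r))
    return prod * sum(terms)

total = sum(E2(r, 0.8) for r in range(6))
```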

5.1 Spacing function for the circular unitary ensemble

To compute the spacing functions for the circular ensembles is as easy or as difficult as for the Gaussian ensembles. From

|e^{iθ} − e^{iφ}| = ± i e^{−i(θ+φ)/2} (e^{iθ} − e^{iφ}),  (102)

one has

∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}| = constant e^{−½ i (n−1)(θ₁+⋯+θ_n)} ∏_{1≤j<k≤n} (e^{iθ_j} − e^{iθ_k})
= constant det[e^{ipθ_j}],  (103)

with

p = −½(n−1), −½(n−3), …, ½(n−3), ½(n−1),  (104)


and j = 1, …, n. The case β = 2 is again the simplest,

P₂(θ₁, …, θ_n) = constant det[e^{ipθ_j}] det[e^{−iqθ_j}] = constant det[Σ_{j=1}^{n} e^{i(p−q)θ_j}],  (105)

where p and q take values as in (104). The probability that the interval (−α, α) does not contain any eigenvalue of a random unitary matrix taken from the circular unitary ensemble is therefore

l

E 2 ( 0 , o~) = constant d e t e i(p-q)°j d01 . . . don

~o L~j=~ J

= det [ ~--~ ~2~-~ ei(p-q)° dO]

[

1

cos(p -

q)OdO ]

= det ~vq - ~ ~

s i ~ O ~ -

~]

= d e t ~pq ~ q) ], (106)

the constant being fixed from the normalization E₂(0, 0) = 1. Also

Σ_q [δ_{pq} − ⟨cos(pθ) cos(qθ)⟩] [δ_{qr} − ⟨sin(qθ) sin(rθ)⟩] = [δ_{pr} − ⟨cos((p−r)θ)⟩],

the brackets ⟨⋯⟩ denoting the average (1/2π) ∫_{−α}^{α} (⋯) dθ, so that

F ≡ det[δ_{pq} − ⟨cos((p−q)θ)⟩] = F₊ F₋,  (107)

with

F₊ = det[δ_{pq} − ⟨cos(pθ) cos(qθ)⟩],  (108)

F₋ = det[δ_{pq} − ⟨sin(pθ) sin(qθ)⟩].  (109)

Note that

⟨cos(pθ) cos(qθ)⟩ = (1/2π) ∫_{−α}^{α} cos(pθ) cos(qθ) dθ = ½ [sin((p−q)α)/(π(p−q)) + sin((p+q)α)/(π(p+q))]  (110)

and

⟨sin(pθ) sin(qθ)⟩ = (1/2π) ∫_{−α}^{α} sin(pθ) sin(qθ) dθ = ½ [sin((p−q)α)/(π(p−q)) − sin((p+q)α)/(π(p+q))]  (111)

are the even and odd parts of sin((p−q)α)/(π(p−q)).

When n → ∞, α → 0 and nα/(2π) = t is finite, the limits of the determinants F, F₊ and F₋ are the values of F(z, t), F₊(z, t) and F₋(z, t) at z = 1, where

F(z, t) = ∏_{i=0}^{∞} (1 − zλ_i),

F₊(z, t) = ∏_{i=0}^{∞} (1 − zλ_{2i}),

F₋(z, t) = ∏_{i=0}^{∞} (1 − zλ_{2i+1}),  (112)

where λ_i are the eigenvalues of the integral equation (91); λ_{2i} and λ_{2i+1} correspond respectively to its even and odd solutions, i.e. they are the eigenvalues of the integral equations

λ ψ(x) = ∫_{−t}^{t} K_±(x, y) ψ(y) dy,  (113)

where K₊(x, y) and K₋(x, y) are the even and odd parts of K(x, y),

K_±(x, y) = ½ [K(x, y) ± K(−x, y)] = ½ [sin π(x−y)/(π(x−y)) ± sin π(x+y)/(π(x+y))].  (114)

5.2 Orthogonal ensemble, β = 1

For β = 1, integration over alternate variables is possible as follows. We order the variables as x₁ ≤ x₂ ≤ ⋯ ≤ x_n, so that

∏_{k<j} |x_j − x_k| = ∏_{k<j} (x_j − x_k) = det[x_k^{j−1}]_{j,k=1,…,n},  (115)

or

P₁(x₁, …, x_n) = constant det[φ_{j−1}(x_k)]_{j,k=1,…,n},  (116)

where x₁ ≤ x₂ ≤ ⋯ ≤ x_n and the φ_j(x) are orthonormal functions, (26). Note that (i) in P₁(x₁, …, x_n) we have changed the weights from exp(−Σ x_j²) to exp(−Σ x_j²/2); this does not matter much since one can change the scale of the x_j; (ii) in the determinant in (116) we can replace φ_{2j+1}(x) by φ′_{2j}(x) if n is even, n = 2m. This is because φ′_j(x), φ_{j−1}(x) and φ_{j+1}(x) have a linear relation (verify!)

√2 φ′_j(x) = √j φ_{j−1}(x) − √(j+1) φ_{j+1}(x).  (117)
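Relation (117) can indeed be verified numerically for the oscillator functions φ_j (a quick sketch assuming numpy; the derivative is approximated by a central difference, which is a choice of this check, not of the text):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, sqrt, pi

def phi(j, x):
    """phi_j(x) = (2^j j! sqrt(pi))^(-1/2) H_j(x) exp(-x^2/2)."""
    c = np.zeros(j + 1); c[j] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2) / sqrt(2.0**j * factorial(j) * sqrt(pi))

x = np.linspace(-3.0, 3.0, 601)
h = 1e-6
errs = []
for j in range(1, 6):
    dphi = (phi(j, x + h) - phi(j, x - h)) / (2 * h)       # numerical phi_j'(x)
    rhs = sqrt(j) * phi(j - 1, x) - sqrt(j + 1) * phi(j + 1, x)
    errs.append(np.max(np.abs(sqrt(2) * dphi - rhs)))
worst = max(errs)
```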

To see this, we replace φ_{2m−1} in the last row by a linear combination of φ′_{2m−2} and φ_{2m−3}, and eliminate φ_{2m−3} in this row by adding a suitable multiple of a previous row. Then replace the row φ_{2m−3} by a linear combination of φ′_{2m−4} and φ_{2m−5}, eliminate φ_{2m−5} in this row, and so on, till we reach the row φ₁, which we replace by φ′₀. (iii) When n is odd, n = 2m + 1, by a similar procedure we can replace the rows φ_{2j} by φ′_{2j−1}, except for the first row, which remains φ₀. Let us come back to our calculation of


E₁(r, t). By definition

E₁(0, θ) = ∫_out P₁(x₁, …, x_n) ∏_{j=1}^{n} u(x_j) dx_j
= constant ∫_{x₁<x₂<⋯<x_n} det[x_k^{j−1}] ∏_{j=1}^{n} u(x_j) e^{−x_j²/2} dx_j,  (118)

where u(x) = 1 if |x| > θ and u(x) = 0 if |x| < θ. With the ordering of the variables one has to multiply the result in the last line by n!, which is absorbed in the constant. As x₁ occurs only in one column, we can integrate over it. This replaces the first column by

F_j(x₂) = ∫_{−∞}^{x₂} u(x₁) x₁^j e^{−x₁²/2} dx₁,   j = 0, 1, …, n − 1.  (119)

Now x₂ appears in two columns and so we integrate over x₃, replacing the third column by F_j(x₄) − F_j(x₂). But we already have a column F_j(x₂); add it to the third column, which becomes F_j(x₄). In the same way, integration over x₅, x₇, … replaces the corresponding columns by F_j(x₆), F_j(x₈), …. If n is even, n = 2m, each of the remaining variables x₂, x₄, …, x_{2m} occurs in two columns, [F_j(x_{2k}), x_{2k}^j]. If n is odd, n = 2m + 1, there is an extra column F_j(∞) on the extreme right. We have still to integrate over x₂, x₄, …, x_{2m} with the restrictions x₂ ≤ x₄ ≤ ⋯ ≤ x_{2m}, and each variable occurs in two columns. But interchanging any two variables we interchange two columns with two other columns and the determinant does not change. So the integrand is symmetric in the remaining variables and we can remove their ordering provided we divide the result by m!, which will be absorbed in the constant outside.

E₁(0, θ) = constant ∫ det[F_j(x_{2k}), u(x_{2k}) x_{2k}^j e^{−x_{2k}²/2}] dx₂ dx₄ ⋯ dx_{2m}.  (120)

In case n is odd, n = 2m + 1, there is an extra column F_j(∞). For simplicity we will take n to be even, n = 2m. If we expand the determinant in (120) by the two columns containing x₂, then the cofactor by the two columns containing x₄, and so on, and integrate over x₂, x₄, …,

we observe that (1) the integral is a sum of terms, each term being the product of m factors of the form

a_{jk} = ∫ [F_j(x) x^k − F_k(x) x^j] u(x) e^{−x²/2} dx = −a_{kj};  (121)

(2) the indices of the various factors a_{jk} in any term are all distinct, and together they exhaust (0, 1, …, 2m − 1); (3) we can restrict j < k in each a_{jk}, since if j > k, we replace a_{jk} by −a_{kj}; (4) the coefficient of the term a_{j₁j₂} a_{j₃j₄} ⋯ a_{j_{2m−1}j_{2m}} is +1 or −1 according as the permutation from (0, 1, …, 2m − 1) to (j₁, j₂, …, j_{2m}) is even or odd. Therefore the result is a Pfaffian,

E₁(0, θ) = constant Σ_{(j)} ± a_{j₁j₂} a_{j₃j₄} ⋯ a_{j_{2m−1}j_{2m}}
= constant Pf[a_{jk}]_{j,k=0,1,…,2m−1}
= constant (det[a_{jk}]_{j,k=0,1,…,2m−1})^{1/2},  (122)


with a_{jk} given by (121). One has from the symmetry

a_{2j,2k} = a_{2j+1,2k+1} = 0,  (123)

so half the matrix elements a_{jk} are zero, and the matrix in (122) has the form

A = [ 0       a_{01}   0       a_{03}  ⋯ ]
    [ a_{10}  0        a_{12}  0       ⋯ ]
    [ 0       a_{21}   0       a_{23}  ⋯ ]
    [ ⋮                                ⋱ ]  (124)

Collecting the non-zero elements in one corner one has

det[a_{jk}] = (det[a_{2j,2k+1}])².  (125)

Hence

E₁(0, θ) = constant det[a_{2j,2k+1}]_{j,k=0,…,m−1}.  (126)

Without changing anything in the above argument we could have replaced x_j^{2k} e^{−x_j²/2} in the Vandermonde determinant (118) by φ_{2k}(x_j) and x_j^{2k+1} e^{−x_j²/2} by φ′_{2k}(x_j).

With this choice (123) is still valid and

a_{2j,2k+1} = δ_{jk} − ∫_{−θ}^{θ} φ_{2j}(x) φ_{2k}(x) dx.  (127)

(Verify it!) Therefore

E₁(0, θ) = det[δ_{jk} − ∫_{−θ}^{θ} φ_{2j}(x) φ_{2k}(x) dx]_{j,k=0,1,…,m−1} = ∏_{i=0}^{m−1} (1 − λ_{2i}),  (128)

where λ_{2i} are the eigenvalues of the integral equation

λ f(x) = ∫_{−θ}^{θ} K_{n+}(x, y) f(y) dy,

with K_{n+}(x, y) the even part of K_n(x, y),

K_{n+}(x, y) = Σ_{j=0}^{m−1} φ_{2j}(x) φ_{2j}(y)  (129)
= ½ [K_n(x, y) + K_n(−x, y)],  (130)

with K_n(x, y) given by (89). As n → ∞, while x√(2n)/π and y√(2n)/π are kept finite, the integral equation (129) goes over to

λ f(x) = ∫_{−t}^{t} K₊(x, y) f(y) dy,  (131)


with

K₊(x, y) = ½ [sin π(x−y)/(π(x−y)) + sin π(x+y)/(π(x+y))],  (132)

and we have the result

E₁(0, t) = F₊(1, t),  (133)

with F₊(z, t) given by (112).
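The Pfaffian step (122) above rests on the identity Pf(A)² = det(A) for an antisymmetric matrix of even order, which is easy to confirm on a random example (a sketch assuming numpy; the recursive first-row expansion used here is the textbook definition of the Pfaffian, written out only for illustration):

```python
import numpy as np

def pfaffian(a):
    """Pfaffian of an antisymmetric matrix by expansion along the first row."""
    n = a.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for k in range(1, n):
        # strike out rows/columns 0 and k, with the alternating sign
        rest = [i for i in range(n) if i not in (0, k)]
        total += (-1) ** (k - 1) * a[0, k] * pfaffian(a[np.ix_(rest, rest)])
    return total

rng = np.random.default_rng(3)
b = rng.standard_normal((6, 6))
a = b - b.T                     # antisymmetric, order 2m = 6
pf = pfaffian(a)
```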

The calculations of E₁(2r, t) and of E₁(2r + 1, t) are long and I will not write them here again, referring you to either the relevant pages of RM or to the article by Mehta and des Cloizeaux (see the list of references). The result is

E₁(2r, t) = E₁(0, t) Σ_{i₁<⋯<i_r} (λ_{2i₁}/(1−λ_{2i₁})) ⋯ (λ_{2i_r}/(1−λ_{2i_r})) [1 − Σ_{j=1}^{r} f_{2i_j}(t) ∫_{−t}^{t} f_{2i_j}(x) dx],  (134)

E₁(2r − 1, t) = E₁(0, t) Σ_{i₁<⋯<i_r} (λ_{2i₁}/(1−λ_{2i₁})) ⋯ (λ_{2i_r}/(1−λ_{2i_r})) Σ_{j=1}^{r} f_{2i_j}(t) ∫_{−t}^{t} f_{2i_j}(x) dx,  (135)

where the f_j(x) are the solutions of the integral equation (91), normalized as

∫_{−t}^{t} f_j(x) f_k(x) dx = δ_{jk}.

From (134) and (135) one sees that

E₁(2r, t) + E₁(2r − 1, t) = E₁(0, t) Σ_{i₁<⋯<i_r} (λ_{2i₁}/(1−λ_{2i₁})) ⋯ (λ_{2i_r}/(1−λ_{2i_r}))  (136)
= (1/r!) (−∂/∂z)^r F₊(z, t)|_{z=1},  (137)

and

E₁(2r, t) + E₁(2r + 1, t) = E₁(0, t) Σ_{i₁<⋯<i_r} (λ_{2i₁}/(1−λ_{2i₁})) ⋯ (λ_{2i_r}/(1−λ_{2i_r})) × [1 − Σ_j (λ_{2j}/(1−λ_{2j})) f_{2j}(t) ∫_{−t}^{t} f_{2j}(x) dx],  (138)

which will be shown later to be equal to

(1/r!) (−∂/∂z)^r F₋(z, t)|_{z=1}.  (139)

5.3 Circular orthogonal ensemble

For the circular ensemble, case β = 1, one can use exactly the same method of integrating over alternate variables. Let us compute the average


H ≡ ⟨ ∏_alt u(θ_j) ∏′_alt v(θ_k) ⟩ = ∫ P₁(θ₁, …, θ_{2m}) ∏_alt u(θ_j) ∏′_alt v(θ_k) dθ₁ ⋯ dθ_{2m},  (140)

where u(θ) and v(θ) are arbitrary functions defined over (−π, π) and for simplicity we have taken n even, n = 2m. The product ∏_alt is taken over a set of alternate points θ_j as they lie on the unit circle, and ∏′_alt is the product over the remaining alternate points.

From (103), taking the θ_j to be ordered, −π < θ₁ ≤ θ₂ ≤ ⋯ ≤ θ_{2m} ≤ π, one has

H = constant ∫_{−π<θ₁≤⋯≤θ_{2m}≤π} det[e^{ipθ_j}] ∏_alt u(θ_j) ∏′_alt v(θ_k) dθ₁ ⋯ dθ_{2m}.  (141)

As in § 5.2, we integrate over one set of alternate variables, say θ₁, θ₃, …, θ_{2m−1}, remove the ordering of the remaining variables and then integrate over them. For this purpose we define

a_{p,q} = ∫∫_{−π<θ<φ≤π} u(θ) v(φ) (e^{i(pθ+qφ)} − e^{i(pφ+qθ)}) dθ dφ;  (142)

then, as in § 5.2, the result is

H² = constant det[a_{p,q}],   p, q = −m + ½, −m + 3/2, …, m − 3/2, m − ½.  (143)

Integration in (142) can be done explicitly for simple functions u, v, and factors like 1/p can be taken outside. Also we can invert the order of the columns. So setting

b_{p,q} = −(1/(ip)) a_{p,−q},  (144)

one has

H² = constant det[b_{p,q}].  (145)

If u(θ)v(φ) = u(−θ)v(−φ), then b_{−p,−q} = b_{p,q}, and there are further reductions; now H itself can be written as a determinant,

H = constant det[F_{p,q}]  (146)

with

F_{p,q} = b_{p,q} + b_{−p,q} = (p/(2π)) ∫∫_{−π<θ<φ≤π} u(θ) v(φ) [cos(pθ) sin(qφ) − cos(pφ) sin(qθ)] dθ dφ.  (147)

If u(θ) = v(θ) = 1, then F_{p,q} = δ_{p,q}, and the constant in (146) is 1 for correct normalization. If u(θ) = v(θ) = 1 for −π + α < θ < π − α, and u(θ) = v(θ) = 0 for π − α < θ < π + α, then

F_{p,q} = δ_{p,q} − (1/π) ∫_{−α}^{α} cos(pθ) cos(qθ) dθ = δ_{p,q} − sin((p−q)α)/(π(p−q)) − sin((p+q)α)/(π(p+q)),  (148)

and det[F_{p,q}] in the limit of large m with mα/π = t becomes F₊(1, t). Next choose u(θ) = v(θ) = 1 for −π + α < θ < π − α, u(θ) = 0, v(θ) = 2, for π − α < θ < π + α. This gives

F_{p,q} = δ_{p,q} − sin((p−q)α)/(π(p−q)) + sin((p+q)α)/(π(p+q)),  (149)

and H = det[F_{p,q}] is the probability for the interval (π − α, π + α) to contain either 0 or 1 eigenvalue, i.e. H = E₁(0, α) + E₁(1, α). (Why v(θ) = 2 and not 1 in (π − α, π + α)?) In the limit of large m and mα/π = t, this determinant goes over to F₋(1, t).

6. Random matrices in physics

In physics three discrete symmetry operations, parity, charge conjugation and time reversal, have gained popularity. Here we will be concerned with only the last, the time reversal operation T. From physical considerations time reversal has to be an anti-unitary operator. In the Schrödinger equation i∂ψ/∂t = Hψ, H is often real, so the validity of the same equation with t → −t means changing the sign of i, or taking the complex conjugate. As any anti-unitary operator, T can be written as T = KC, K a unitary operator and C the complex conjugation operator. When the representation of the states is changed by a unitary operator, ψ → Uψ, T transforms to

T → UTU⁻¹ = UTU†,  (150)

or K transforms to

K → UKUᵀ.  (151)

Operating twice with T should leave the physical system unchanged, i.e.

T² = α1,   |α| = 1,  (152)

where 1 is the unit operator. Or

T² = KCKC = KK*CC = KK* = α1.  (153)

But K is unitary,

KK† = 1.  (154)

So

K = αKᵀ = α(αKᵀ)ᵀ = α²K.  (155)
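The two possibilities allowed by (155), α = ±1, can be made concrete with 2 × 2 examples: K = 1 is symmetric and gives KK* = +1, while K = iσ_y is antisymmetric and gives KK* = −1 (the Kramers case). A minimal check (assuming numpy):

```python
import numpy as np

sigma_y = np.array([[0.0, -1j], [1j, 0.0]])
K_sym = np.eye(2)          # symmetric K:      T^2 = K K* = +1
K_asym = 1j * sigma_y      # antisymmetric K:  T^2 = K K* = -1

t2_sym = K_sym @ K_sym.conj()
t2_asym = K_asym @ K_asym.conj()
```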

