
Direction of Arrival Estimation Using State Space Modeling

Surendra Prasad and Bindu Chandna Department of Electrical Engineering,

Indian Institute of Technology, New Delhi 110016, India

ABSTRACT

In this paper, we present a class of High Resolution Direction Finding techniques for passive, uniform linear arrays. These are based on the state-space parameterization of linear array data. Directions-of-Arrival (DOAs) are obtained as the eigenvalues of the state transition matrix. These techniques are also applicable to situations of source coherence in the array processing problem, which have required special attention in the past.

I. INTRODUCTION

Here we propose a class of High Resolution Direction Finding techniques, which are based on the "spatial state variable formalism" of linear array data. These techniques are motivated by the fact that, for the case of plane wave sources, the spatial array data can be modeled as an autonomous ARMA (Auto Regressive Moving Average) process, with poles (corresponding to the arrival directions) lying on the unit circle [8].

Since the state transition matrix in the state-space description of an ARMA process contains all the information regarding the pole locations, it follows that the DOAs can be estimated from an estimate of this matrix or its parameters. The problem of DOA estimation then reduces to that of finding the eigenvalues of the "state transition matrix".

Several approaches for solving this model identification problem (hence the DOA estimation) are presented here, all of which share the interesting feature that they are applicable irrespective of whether the sources are correlated or not.

II. PROBLEM FORMULATION

A. Measurement Model

Consider a uniformly spaced linear array with N sensor elements. Assume that there are D narrow band, stationary, zero mean sources centered at frequency ω0 and sufficiently far from the array to allow the planar wavefront assumption. The additive noise present is also assumed to be a stationary, zero mean Gaussian random process. The received signal at the i-th sensor at time t can be represented by

r_i(t) = Σ_{k=1}^{D} s_k(t) exp[-jω0 (i-1) d sinθ_k / c] + n_i(t),   for i = 1, ..., N,    (1)

where

s_k(t) : the signal emitted by the k-th source as received at sensor 1
θ_k : the direction of arrival of the k-th source
d : inter-element spacing (assumed to be less than half of the signal wavelength)
c : the speed of propagation
n_i(t) : the additive noise at the i-th sensor.

In matrix notation, the above equation can be rewritten as

r(t) = [r_1(t), ..., r_N(t)]^T = A s(t) + n(t),    (2)

where s(t) is a (D×1) vector representing the complex envelopes of the signals emitted by the D sources, and A is the (N×D) direction matrix whose columns {a(θ_k), k = 1, ..., D} are the direction vectors defined by

a(θ_k) = [1, exp(-jγ_k), ..., exp(-j(N-1)γ_k)]^T,

where γ_k = ω0 (d sinθ_k)/c is the propagation delay corresponding to the k-th source.
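To make this measurement model concrete, the following Python/NumPy sketch (not part of the original paper; the function names, half-wavelength spacing and unit-power circular Gaussian source model are illustrative assumptions) generates synthetic snapshots according to (1)-(2).

```python
import numpy as np

def steering_matrix(thetas_deg, N, d_over_lambda=0.5):
    """Direction matrix A (N x D) of a uniform linear array, cf. (2).
    Element i of a(theta_k) is exp(-j*2*pi*(d/lambda)*(i-1)*sin(theta_k)),
    i.e. gamma_k = 2*pi*(d/lambda)*sin(theta_k) as in (1)."""
    thetas = np.deg2rad(np.asarray(thetas_deg, dtype=float))
    i = np.arange(N)[:, None]                                   # sensor index (i-1)
    gammas = 2.0 * np.pi * d_over_lambda * np.sin(thetas)[None, :]
    return np.exp(-1j * i * gammas)

def simulate_snapshots(thetas_deg, N=5, T=50, snr_db=20.0, rng=None):
    """Generate r(t) = A s(t) + n(t) for T snapshots with uncorrelated,
    unit-power circular Gaussian sources and white noise, cf. (2)."""
    rng = np.random.default_rng(rng)
    D = len(thetas_deg)
    A = steering_matrix(thetas_deg, N)
    s = (rng.standard_normal((D, T)) + 1j * rng.standard_normal((D, T))) / np.sqrt(2)
    sigma = np.sqrt(10.0 ** (-snr_db / 10.0) / 2.0)
    n = sigma * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
    return A @ s + n, A
```

A call such as simulate_snapshots([30.0, 50.0]) produces a 5 × 50 data matrix matching the scenario used later in Section IV.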

B. State Variable Model

The general form of the state variable model for the spatial ARMA process can be written as

z_{k+1}(t) = F z_k(t)    (3)

r_k(t) = h z_k(t) + n_k(t),   for k = 1, ..., N,    (4)

where

z_k(t) : (n×1) state vector at the k-th sensor at time t
F : (n×n) state transition matrix
h : (1×n) output or observation matrix.

The spatial frequencies given by exp(jγ_k) form the poles of the autonomous ARMA model given by (3) and (4). Since the poles of this model are given by the eigenvalues of F, it follows that the DOAs can be estimated once the F corresponding to a particular state-variable representation is known. Thus, we have

λ_k = k-th eigenvalue of F = exp(jγ_k),

or,

θ_k = sin^{-1}(c γ_k / (ω0 d)).    (5)

If the number of sources present is D, the dimension of the state vector, n, is required to be at least D.
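As a small illustration of (5) (a sketch, not the authors' code; the function name and half-wavelength spacing are assumptions), the mapping from unit-circle eigenvalues of F to bearings can be written as:

```python
import numpy as np

def eigenvalues_to_doas(eigvals, d_over_lambda=0.5):
    """Map eigenvalues exp(j*gamma_k) of the state transition matrix F to DOAs
    via eq. (5): theta_k = arcsin(c*gamma_k/(omega0*d)) = arcsin(gamma_k/(2*pi*d/lambda))."""
    gammas = np.angle(np.asarray(eigvals))          # gamma_k in (-pi, pi]
    return np.rad2deg(np.arcsin(gammas / (2.0 * np.pi * d_over_lambda)))
```

With the data convention of the previous sketch, the spatial-correlation-based estimators of Section III yield eigenvalue angles of +γ_k, so this mapping applies to them directly; the sign should be rechecked whenever the steering phase convention is changed.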

III. CANONICAL REALIZATIONS

Some of the state-variable characterizations that seem potentially relevant in the context of the array signals of interest here are as follows:

A. The Diagonal Canonical Realization (D.C.R.)

In this realization, the state variables are chosen so as to render the state transition matrix F in diagonal form. Since the diagonal values of F in this case are also its eigenvalues, it follows that F can be written as

F = diag{exp(jγ_1), ..., exp(jγ_n)},    (6)

with each diagonal element yielding one of the source directions.

Defining the state variables as follows:

z_1(t) = [s_1(t), ..., s_D(t)]^T    (7)

and

z_{k+1}(t) = F^k z_1(t),    (8)

the observed array data can be expressed in terms of these state variables as

r = [h^T, (hF)^T, ..., (hF^{N-1})^T]^T z_1 = O z_1,    (9)

where

h = [1, 1, ..., 1]    (10)

is a vector of dimension n and the matrix O is defined as

O = [h^T, (hF)^T, ..., (hF^{N-1})^T]^T.    (11)

It follows from (9) that the covariance matrix of r can be written as

C = E[r r†] = O S O†,    (12)

where S = E[z_1(t) z_1†(t)] is the source (or, in this case, the state) covariance matrix, [.]† denotes the Hermitian transpose of the matrix [.] and E[.] denotes the expectation operator. S would be singular and non-diagonal if some of the sources are coherent. The rank of the matrix C then degrades and it assumes a non-Toeplitz form. In this situation, using spatial averaging instead of time averaging ensures that asymptotically we obtain a Toeplitz matrix C̃ which has rank n. The elements of this matrix are obtained in terms of the correlation coefficients c(k) between the i-th and (i+k)-th sensors based on spatial averaging. Thus we have

c(k) ≜ lim_{N→∞} (1/N) Σ_{i=1}^{N} r_i [r_{i+k}]*    (13)

     = h F^k G h†,    (14)

where

G ≜ lim_{N→∞} (1/N) Σ_{k=1}^{N} z_k z_k†    (15)

is the (n×n) diagonal state covariance matrix. The Toeplitz covariance matrix C̃ is constructed as

C̃ = [ c(0)   c(1)*   ... ]
    [ c(1)   c(0)    ... ]
    [  ...    ...    ... ]    (16)

This can be decomposed as

C̃ = O G O†,    (18)

where O is the observability matrix which, for the asymptotic case, represents the array steering matrix corresponding to an infinite aperture linear array.

Let C̃ = U Λ U† be the EVD of C̃. Now an estimate Ô of O can be taken as

Ô = U Λ^{1/2},    (19)

where Λ^{1/2} is the diagonal matrix whose diagonal elements are the square roots of those of Λ. From the structure of the observability matrix O = [h^T, (hF)^T, (hF²)^T, ...]^T, it follows that

O F = [(hF)^T, (hF²)^T, ...]^T = O↑,    (20)

where [.]↑ denotes the upward row-shifted version of the matrix [.] with the first row removed and a last row of zeros appended. Hence, a least squares solution for F can then be obtained as

F̂ = Ô^+ Ô↑,    (21)

where [.]^+ denotes the pseudoinverse, and the eigenvalues of F̂ give the DOAs of the sources as in (5).

In actual practice, since the array length is finite, a combined spatial and temporal averaging is used to construct an estimate Ĉ, which replaces C̃ in (16).

This method works equally well for the case of coherent sources because of the new form of spatial smoothing used here, which does not reduce the aperture of the array.
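A minimal sketch of this procedure (not the authors' implementation; the function name, the retention of only the n dominant eigenpairs when forming Ô in (19), and the half-wavelength spacing follow the assumptions of the earlier simulation sketch) is:

```python
import numpy as np

def dcr_doas(R, n, d_over_lambda=0.5):
    """Diagonal Canonical Realization sketch. R: (N x T) snapshots, n: model order."""
    N, T = R.shape
    # Spatially and temporally averaged correlation coefficients c(k), cf. (13)
    c = np.array([np.mean(R[:N - k, :] * np.conj(R[k:, :])) for k in range(N)])
    # Toeplitz covariance matrix, cf. (16): element (i,l) = c(i-l), with c(-m) = c(m)*
    idx = np.arange(N)
    Ct = c[np.abs(idx[:, None] - idx[None, :])]
    Ct = np.where(idx[:, None] >= idx[None, :], Ct, np.conj(Ct))
    # EVD and estimate of the observability matrix from the n dominant pairs, cf. (19)
    w, U = np.linalg.eigh(Ct)
    dominant = np.argsort(w)[::-1][:n]
    O_hat = U[:, dominant] * np.sqrt(np.abs(w[dominant]))
    # Shift invariance O F = O_shifted and its least squares solution, cf. (20)-(21)
    F_hat = np.linalg.pinv(O_hat[:-1]) @ O_hat[1:]
    gammas = np.angle(np.linalg.eigvals(F_hat))
    return np.sort(np.rad2deg(np.arcsin(gammas / (2.0 * np.pi * d_over_lambda))))
```

Retaining only the n dominant eigenpairs is one common way to realize (19) with noisy, finite data; the eigenvalue angles of F̂ then map to bearings through (5).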

B. Companion-Form Canonical Realization (C.F.C.R.)

Here we suggest another technique which identifies the state transition matrix in its companion form [3], starting once again from c(k) as estimated via the spatial averaging process (13). However, the estimation of F now proceeds differently, with the specific objective of identifying F in its companion canonical form, which is given by

F = [ 0  0  ...  0  -μ_1 ]
    [ 1  0  ...  0  -μ_2 ]
    [ 0  1  ...  0  -μ_3 ]
    [          ...       ]
    [ 0  0  ...  1  -μ_n ]    (22)

with n unknown parameters, viz. the coefficients {μ_1, ..., μ_n}. Starting with the asymptotic quantities, we can write from (14):

c(k) = h G h†,       k = 0,
c(k) = h F^k G h†,   k > 0.    (23)

This relation can be equivalently expressed as

[c(1), c(2), ..., c(n)]^T = B G h†,    (24)

where n is the model order and

B = [ (hF)^T, (hF²)^T, ..., (hF^n)^T ]^T.    (25)

Using the Cayley-Hamilton theorem [1], it can be shown that [9]

c(n+j) = -Σ_{i=1}^{n} μ_i c(j+i-1),   j = 1, 2, ...    (26)

Eqn. (26) can be written equivalently in the matrix form

[ c(n+1) ]       [ c(1)  c(2)   ...  c(n)     ] [ μ_1 ]
[ c(n+2) ]  =  - [ c(2)  c(3)   ...  c(n+1)   ] [ μ_2 ]
[  ...   ]       [  ...                       ] [ ... ]
[ c(2n)  ]       [ c(n)  c(n+1) ...  c(2n-1)  ] [ μ_n ]    (27)

from which the coefficients {μ_1, μ_2, ..., μ_n} can be estimated. In this case also, the quantities c(i) are nothing but the spatially smoothed correlation coefficients. Hence, the proposed algorithm may be expected to yield accurate source directions even when the sources are fully correlated.
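A hedged sketch of this estimator (illustrative names; solving (27) as the small linear system below is one straightforward numerical route, not necessarily the authors' exact procedure, and it assumes N > 2n sensors so that c(0), ..., c(2n) can be estimated by spatial averaging) is:

```python
import numpy as np

def cfcr_doas(R, n, d_over_lambda=0.5):
    """Companion-Form Canonical Realization sketch. R: (N x T) snapshots, n: model order."""
    N, T = R.shape
    # Spatially smoothed correlation coefficients c(0), ..., c(2n), cf. (13)
    c = np.array([np.mean(R[:N - k, :] * np.conj(R[k:, :])) for k in range(2 * n + 1)])
    # Linear system (27): row j is [c(j+1), ..., c(j+n)], right-hand side -c(n+j+1)
    H = np.array([[c[j + i + 1] for i in range(n)] for j in range(n)])
    rhs = -c[n + 1:2 * n + 1]
    mu = np.linalg.solve(H, rhs)                       # [mu_1, ..., mu_n]
    # Poles are the roots of lambda^n + mu_n*lambda^(n-1) + ... + mu_1, cf. (22) and (5)
    poles = np.roots(np.concatenate(([1.0 + 0j], mu[::-1])))
    gammas = np.angle(poles)
    return np.sort(np.rad2deg(np.arcsin(gammas / (2.0 * np.pi * d_over_lambda))))
```

Because the eigenvalues of the companion matrix (22) are the roots of its characteristic polynomial, solving for {μ_i} and rooting the polynomial is equivalent to forming F and taking its eigenvalues.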

C. Canonical Variate Realization (C.V.R.)

In this form, no prior structure is assumed for the state transition matrix or the state variables, and these are allowed to evolve from a canonical variate analysis of the data that is being modeled [8, 9].

The complete array is divided into a number of overlapping subarrays: left subarrays of length p each, consisting of the sensor elements {1, ..., p}, {2, ..., p+1} and so on, and right subarrays of length f each, comprising the sensor elements {p+1, ..., p+f}, {p+2, ..., p+f+1}, etc., with p > D and 1 ≤ f ≤ N-p. A sequence of least squares multi-step linear prediction problems is then solved simultaneously, based on the premise that a left subarray of length p is sufficient to predict the outputs of the elements of a right subarray of length f, for appropriately chosen values of p and f.

The k-th left and right subarray signal vectors r_k^L and r_k^R are constructed as

r_k^L(t) = [r_k(t), ..., r_{k-p+1}(t)]^T,   k = p, ..., N,    (28)

r_k^R(t) = [r_{k+1}(t), ..., r_{k+f}(t)]^T,   k = 1, ..., N-f.    (29)

The canonical variate technique is based on computing a set of n-dimensional state vectors z_k, which can be interpreted as the summary of the left subarray data for the purpose of optimally predicting r_k^R(t). The estimator of r_k^R(t) from r_k^L(t) may, thus, be viewed as obtainable in two steps. In the first step, we obtain the n-dimensional state vector z_k via the linear transformation T(n) as

z_k(t) = T(n) r_k^L(t),    (30)

from which r_k^R(t) may be estimated via the transformation M(n) as

r̂_k^R(t) = M(n) z_k(t),   k = p, ..., N-f,    (31)

where T(n) and M(n) are appropriately chosen n×p and f×n matrices.

The transformation matrices T(n) and M(n) are determined so that the estimated right subarray data

r̂_k^R(t) = M(n) T(n) r_k^L(t)    (32)

has an associated error vector

e_k = r_k^R(t) - r̂_k^R(t)    (33)

with the smallest weighted sum square error, given by

J = Σ_{k=p}^{N-f} e_k† <r^R, r^R>^{-1} e_k,    (34)

where <r^R, r^R> is the weighting matrix, taken to be equal to the averaged covariance matrix of the various right subarrays, i.e.,

<r^R, r^R> ≜ Σ_{t=1}^{N'} Σ_{k=p}^{N-f} r_k^R(t) r_k^R(t)† / [(N-p-f+1) N'].    (35)

The DOA estimation now proceeds as follows:

(i) Estimation of the Transformation Matrices M(n) and T(n)

We first define the standardized vectors corresponding to the right and the left subarray data sets as

r̃_k^R = [<r^R, r^R>^{-1/2}]† r_k^R,    (36)

where <r^R, r^R>^{-1/2} is the inverse of any matrix square root of the covariance matrix of the right subarrays as defined in (35), and

r̃_k^L = [<r^L, r^L>^{-1/2}]† r_k^L,   k = p, ..., N-f,    (37)

where <r^L, r^L> is defined in a manner similar to that in (35). Now consider the predictor G of r̃_k^R given r̃_k^L:

r̃_k^R ≈ G r̃_k^L.    (38)

An optimal choice of G, minimizing the weighted sum squared error J in (34), is obtained as

G = [<r^R, r^R>^{-1/2}]† <r^R, r^L> <r^L, r^L>^{-1/2}.    (39)

It is interesting to note that the matrix G is a spatially averaged cross-covariance matrix between the standardized left and right subarray vectors. The sum in (35) over the index k, therefore, may be interpreted as being equivalent to spatial averaging of N_1 (= N-p-f+1) covariance matrices. We shall call this the "first layer" of spatial smoothing.

To find the optimal G under the constraint that it can be decomposed into the product M(n) T(n) of the transformation matrices, we factorize G using the SVD as

G = U Σ V†,    (40)

where U and V are orthogonal matrices and Σ is an (f×p) matrix whose diagonal elements {Σ(i,i), i = 1, ..., min(p,f)} are the singular values arranged in descending order.

The SVD in (40) induces a family of possible factorizations for constructing the (f×p) predictor matrices G.

The optimal rank-n approximation G(n) of G is then obtained by retaining the n significant singular values and the corresponding singular vectors, i.e.,

G(n) = U(n) Σ(n) V†(n),    (41)

with U(n) being the (f×n) submatrix containing the first n columns of U, V(n) the (p×n) submatrix containing the first n columns of V and Σ(n) the upper left (n×n) submatrix of Σ, where both f and p are selected to be greater than D.

From (38) and (41), the predicted data can then be written as

r̂_k^R(t) = [<r^R, r^R>^{1/2}]† U(n) Σ(n) V†(n) [<r^L, r^L>^{-1/2}]† r_k^L(t).    (42)

Comparing (42) and (32), the optimal transformation matrices M(n) and T(n) can be identified as

M(n) = [<r^R, r^R>^{1/2}]† U(n) Σ(n)    (43)

and

T(n) = V†(n) [<r^L, r^L>^{-1/2}]†.    (44)

In situations when f < D, a rank-D approximation to G is clearly not defined. Even in this situation, a meaningful factorization of G exists which may provide information about the source directions [9].

A family of predictor matrices G_{f×m}, for m = 1, ..., p, is computed such that

G_{f×m} = U_{f×f} Σ_{f×m} [V_{p×m}]†,    (45)

with U_{f×f} being the (f×f) submatrix containing the first f columns of U, V_{p×m} the (p×m) submatrix containing the first m columns of V and Σ_{f×m} the upper left (f×m) submatrix of Σ.

Now, the optimal predictor matrix is determined as G_{f×n}. Thus we can write

r̃_k^R ≈ G_{f×n} r̃_k^L = U_{f×f} Σ_{f×n} [V_{p×n}]† r̃_k^L.    (46)

Substituting (36) and (37) in (46), we get

r̂_k^R(t) = [<r^R, r^R>^{1/2}]† U_{f×f} Σ_{f×n} [V_{p×n}]† [<r^L, r^L>^{-1/2}]† r_k^L(t).    (47)

Comparing (47) with (32), the optimal transformation matrices M(n) and T(n) can be identified as

M(n) = [<r^R, r^R>^{1/2}]† U_{f×f} Σ_{f×n}    (48)

and

T(n) = [V_{p×n}]† [<r^L, r^L>^{-1/2}]†.    (49)

(ii) Construction of the State Vector Sequence z_k

The state vector sequence can now be constructed as

z_k(t) = T(n) r_k^L(t),   for k = p, ..., N.    (50)

The state vector set {z_k, k = p, ..., N-f} can be used to predict the data of the corresponding right subarrays, viz., {r_k^R, k = p, ..., N-f}. It is interesting to note that the state vectors {z_{N-f+1}, ..., z_N} can be used to predict the data of the right subarrays {r_k^R, k = N-f+1, ..., N}, which do not exist physically. This is a very useful observation because it provides us with the possibility of estimating the DOAs from the complete set of (N-p) state vectors. This avoids the loss of any array aperture in this method. It also helps in the resolution of coherent sources through an additional layer of spatial smoothing.

(iii) Computation of the State Transition Matrix F

The given state model requires z_{k+1} = F z_k (exactly); hence, we need to solve a least squares problem in order to obtain an estimate of the state transition matrix F that fits the given sequence of state vectors to this model. This yields

F̂ = <z₊, z> <z, z>^{-1},    (51)

where

<z₊, z> = Σ_{t=1}^{N'} Σ_{k=p}^{N-1} z_{k+1}(t) z_k†(t) / [(N-p) N'],    (52)

<z, z> = Σ_{t=1}^{N'} Σ_{k=p}^{N-1} z_k(t) z_k†(t) / [(N-p) N'].    (53)


It is important to point out that this is equivalent to a spatial averaging of (N-p) state covariance matrices.

This then forms a "second layer" of spatial smoothing which potentially helps in resolving up to N_2 (= N-p) coherent sources.

Once the matrix F is known, its eigenvalues give the estimates of the source directions as in (5).
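Putting steps (i)-(iii) together, a self-contained sketch is given below (again with hypothetical names; it repeats the covariance and standardization steps above, uses the rank-n factorization of (40)-(44) and therefore assumes f ≥ n rather than the extended factorization (45)-(49), and follows the data convention of the simulation sketch in Section II).

```python
import numpy as np

def cvr_doas(R, n, p, f, d_over_lambda=0.5):
    """Canonical Variate Realization sketch. R: (N x T) snapshots, n: model order,
    p/f: left/right subarray lengths (p > n and n <= f <= N - p assumed here)."""
    N, T = R.shape
    ks = np.arange(p, N - f + 1)                       # subarray positions k = p, ..., N-f
    # Left/right subarray snapshots, cf. (28)-(29); left vectors run from sensor k down to k-p+1
    rL = np.stack([R[k - p:k][::-1] for k in ks])      # (K, p, T)
    rR = np.stack([R[k:k + f] for k in ks])            # (K, f, T)
    K = len(ks)
    # "First layer" of smoothing: spatially averaged covariances, cf. (35)
    CRR = np.einsum('kit,kjt->ij', rR, np.conj(rR)) / (K * T)
    CLL = np.einsum('kit,kjt->ij', rL, np.conj(rL)) / (K * T)
    CRL = np.einsum('kit,kjt->ij', rR, np.conj(rL)) / (K * T)

    def inv_sqrt(M):
        w, V = np.linalg.eigh(M)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.conj().T

    # Standardized predictor, its SVD and the rank-n transformation T(n), cf. (36)-(44)
    G = inv_sqrt(CRR) @ CRL @ inv_sqrt(CLL)
    _, _, Vh = np.linalg.svd(G)
    Tn = Vh[:n] @ inv_sqrt(CLL)
    # State vectors for every left subarray k = p, ..., N, cf. (50): no aperture loss
    zs = np.stack([Tn @ R[k - p:k][::-1] for k in range(p, N + 1)])   # (N-p+1, n, T)
    # "Second layer" of smoothing: least squares state transition matrix, cf. (51)-(53)
    Zpz = np.einsum('kit,kjt->ij', zs[1:], np.conj(zs[:-1]))
    Zzz = np.einsum('kit,kjt->ij', zs[:-1], np.conj(zs[:-1]))
    F_hat = Zpz @ np.linalg.inv(Zzz)
    # With the steering phase exp(-j*gamma) per sensor, the sensor-to-sensor factor is
    # exp(-j*gamma), so gamma = -angle(eigenvalue) before applying (5)
    gammas = -np.angle(np.linalg.eigvals(F_hat))
    return np.sort(np.rad2deg(np.arcsin(gammas / (2.0 * np.pi * d_over_lambda))))
```

The common normalization factors of (52)-(53) cancel in (51) and are therefore omitted in this sketch.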

Choice of the Parameters

The choice of parameters p and f plays a crucial role in obtaining an accurate estimate of the source directions here. D uncorrelated sources can be resolved with (D +1) elements by choosing p > D and f = 1. For resolution of coherent sources, we still need to choose p > D, but the number of coherent sources that can be detected is equal to (N-p).

IV. SIMULATIONS

Computer simulations were carried out to ascertain the performance of these methods as compared to that of covariance ESPRIT. In the presence of coherent sources, the recently proposed extension of ESPRIT for coherent sources [2] is used.


The simulation model consists of a 5 element linear array with two uncorrelated plane waves impinging from the directions 30° and 50° w.r.t. the normal to the array, to study the resolution performance of these algorithms. The SNR is selected to be 20 dB and the number of snapshots N' is taken as 50.

For the solution of this problem via the C.V.R., we choose p=3, f=l. The model order n is taken to be equal to D. Figs. 1 (a) - (e) show the scatter plots obtained over 25 independent runs, using the D.C.R., C.F.C.R., C.V.R. and the modified ESPRIT algorithm [2] respectively. It is seen that all the methods are able to resolve the source bearings accurately.
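For reference, a small driver along the lines of this experiment could look as follows (it reuses the simulate_snapshots, dcr_doas, cfcr_doas and cvr_doas sketches given earlier; the sketched cvr_doas needs f ≥ n, so f = 2 is used here instead of the paper's f = 1, and no attempt is made to reproduce the paper's figures or the ESPRIT baselines).

```python
import numpy as np

# Illustrative Monte Carlo check, assuming the earlier sketches are in scope.
true_doas = [30.0, 50.0]
estimates = {"DCR": [], "CFCR": [], "CVR": []}
for run in range(25):
    R, _ = simulate_snapshots(true_doas, N=5, T=50, snr_db=20.0, rng=run)
    estimates["DCR"].append(dcr_doas(R, n=2))
    estimates["CFCR"].append(cfcr_doas(R, n=2))
    estimates["CVR"].append(cvr_doas(R, n=2, p=3, f=2))
for name, vals in estimates.items():
    vals = np.array(vals)              # (runs, 2) estimated bearings in degrees
    print(name, "mean:", vals.mean(axis=0).round(2), "std:", vals.std(axis=0).round(2))
```

The estimated bearings should cluster around the true values for this uncorrelated-source scenario; the coherent-source case of Fig. 2 additionally requires generating fully correlated source waveforms.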

In the next set of simulations, the sources are assumed to be coherent. Figs. 2 (a)-(e) show the corresponding scatter plots for 25 independent runs.

Once again it is seen that all the methods are able to resolve the source bearings accurately.

The relative performance of these methods, in order of increasing resolution capability, is: CVR, CFCR, modified ESPRIT and DCR.

V. CONCLUSIONS

In this paper, we have demonstrated the effectiveness of "state space modelling" of the sensor array data.

These techniques are applicable to both coherent and non-coherent sources.

VI. REFERENCES

[1] G.H. Golub and C.F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Maryland, 1985.

[2] B. Chandna and S. Prasad, "Application of ESPRIT to Coherent Source Direction Finding", Journal of the IETE, vol. 35, Mar.-Apr. 1989, pp. 78-84.

[3] R.K. Mehra, "On-line System Identification of Linear Dynamical Systems", IEEE Trans. on Automatic Control, vol. AC-16, Feb. 1971, pp. 12-21.

[4] S.Y. Kung, "State Space and SVD Based Approximation Methods for the Harmonic Retrieval Problem", Journal of the Optical Society of America, vol. 73, Dec. 1983, pp. 1799-1811.

[5] T.J. Shan, M. Wax and T. Kailath, "On Spatial Smoothing for Direction-of-Arrival Estimation of Coherent Signals", IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 33, no. 4, August 1985, pp. 806-811.

[6] A. Paulraj, R. Roy and T. Kailath, "Estimation of Signal Parameters via Rotational Invariance Techniques - ESPRIT", in Proc. Nineteenth Asilomar Conference on Circuits, Systems and Computers, Asilomar, CA, Nov. 1985.

[7] J.V. White, "Stochastic State Space Models from Empirical Data", Proc. Int. Conf. on Acoustics, Speech and Signal Processing, April 1983, pp. 243-246.

[8] H. Akaike, "Maximum Likelihood Identification of Gaussian ARMA Models", in System Identification: Advances and Case Studies (R.K. Mehra and D.G. Lainiotis, Eds.), Academic Press, New York, 1976, pp. 27-96.

[9] S. Prasad and B. Chandna, "DOA Estimation Using Stochastic Model Order Reduction via State Space Modeling", European Journal on Signal Processing (accepted for publication).

Fig. 1 : Scatter Plots obtained over 50 independent runs via (a) CVR, (b) CFCR, (c) DCR, (d) ESPRIT (Forward Smoothing) (e) ESPRIT (Forward-Backward Smoothing) in the presence of uncorrelated sources

Fig. 2 : Scatter Plots obtained over 50 independent runs via (a) CVR, (b) CFCR, (c) DCR, (d) ESPRIT (Forward Smoothing) (e) ESPRIT (Forward-Backward Smoothing) in the presence of coherent sources
