On-line System Identification Using Chebyshev Neural Networks

S. Purwar, I. N. Kar, A. N. Jha
Department of Electrical Engineering, Indian Institute of Technology Delhi

New Delhi-110016, India

E-mail: ink@ee.iitd.ernet.in, Tel: +91-11-26591073

Abstract-This paper proposes a computationally efficient artificial neural network (ANN) model for the identification of unknown nonlinear dynamic continuous and discrete time systems. A single layer functional link ANN is used for the model, where the need for a hidden layer is eliminated by expanding the input pattern by Chebyshev polynomials. These models are linear in their parameters. The recursive least squares method with forgetting factor is used as the on-line learning algorithm for parameter updating. The performance of the identification method is demonstrated on two single input single output (SISO) continuous time plants and two discrete time plants. Stability of the identification scheme is also addressed.

Index terms-Identification, neural network, Chebyshev polynomials.

1. INTRODUCTION

In the last few years, a growing interest in the study of nonlinear systems in control theory has been observed. This interest stems from the need to address some long standing requirements of automatic control: to work with more and more complex systems, to satisfy stricter design criteria, and to do so with less and less a priori knowledge of the plant.

A new set of methods has been developed recently which apply artificial neural networks to the tasks of identification and control of dynamic systems. These works are supported by two of the most important capabilities of neural networks: their ability to learn [1] and their good performance in the approximation of nonlinear functions.

At present, most of the work on system identification using neural networks is based on multilayer feedforward neural networks with backpropagation learning or more efficient variations of this algorithm [4][5]. These methods have been applied to real processes and have shown adequate behaviour.

This paper presents the use of Chebyshev neural network (CNN) models [3][6] to identify continuous as well as discrete time processes. Additionally, the identification method uses on-line training, unlike the off-line training adopted in [6], and the training scheme is based on a recursive least squares algorithm.

This paper is organized as follows. The problem statement and the Chebyshev neural network, whose weight updating is done using recursive least squares, are presented in Sections 2 and 3. The simulation of continuous and discrete time systems is illustrated in Section 4. Finally, Section 5 summarizes the conclusions of the present work.

2. PROBLEM STATEMENT

Consider a class of SISO nonlinear continuous time systems described by

\dot{x} = f(x, u), \quad y = h(x)   (1)

where x ∈ R^n, f is locally Lipschitz and h is a continuous function; x is the system state, u is the input and y is the output. The method for system identification is depicted in Fig. 1. The plant is excited by a signal u, and the output y is measured. The plant is assumed to be stable with known parameterization but with unknown values of the parameters. The objective is to construct a suitable identification model which, when subjected to the same input u as the plant, produces an output ŷ which approximates y in the sense ‖ŷ − y‖ < ε for some desired ε > 0 and a suitably defined norm.

Fig. 1. Identification scheme.

The choice of the identification model and the method of adjusting its parameters based on the identification error constitute the two principal parts of the identification problem [2]. In the present study we also consider SISO discrete time plants described by the difference equation

y(k+1) = f[y(k), y(k-1), \ldots, y(k-n+1)] + g[u(k), u(k-1), \ldots, u(k-m+1)]   (2)

where u(k) and y(k) represent the input and the output of the plant at the kth instant of time.

3. CHEBYSHEV NEURAL NETWORK

3.1. Structure of CNN

The ANN structure used in this paper is a single layer Chebyshev Neural Network (CNN). The CNN is a functional-link network (FLN) based on Chebyshev polynomials. The architecture of the CNN consists of two parts, namely a numerical transformation part and a learning part [3].

Numerical transformation deals with the input to the hidden layer by an approximate transformable method. The transformation is the functional expansion (FE) of the input pattern, comprising a finite set of Chebyshev polynomials.

As a result, the Chebyshev polynomial basis can be viewed as a new input vector. For example, consider a two dimensional input pattern X = [x_1 \; x_2]^T. An enhanced pattern obtained by using Chebyshev functions is given by

\phi = [1 \;\; T_1(x_1) \;\; T_2(x_1) \;\; T_1(x_2) \;\; T_2(x_2) \; \ldots]^T   (4)

where T_j(x_i) is a Chebyshev polynomial. The Chebyshev polynomials can be generated by the following recursive formula [3]

T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x), \quad T_0(x) = 1   (5)
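As an illustration, the following is a minimal Python sketch of the functional expansion and the resulting linear-in-weights output; the function names, the expansion order and the example input values are our own choices, not from the paper:

```python
import numpy as np

def chebyshev_basis(x, order):
    """T_1(x)..T_order(x) via the recursion T_{n+1} = 2x T_n - T_{n-1} (Eq. 5)."""
    T = [1.0, x]                      # T_0 = 1, T_1 = x (the choice made in the paper)
    for _ in range(2, order + 1):
        T.append(2.0 * x * T[-1] - T[-2])
    return T[1:]                      # T_0 is included once as the leading 1 of phi

def functional_expansion(x_vec, order=2):
    """Enhanced pattern phi = [1, T_1(x1), T_2(x1), T_1(x2), T_2(x2), ...]^T (Eq. 4)."""
    phi = [1.0]
    for xi in x_vec:
        phi.extend(chebyshev_basis(xi, order))
    return np.array(phi)

# The CNN output (Eq. 6) is then a single dot product, linear in the weights:
phi = functional_expansion([0.3, -0.7])    # 2 inputs, order 2 -> 5-term pattern
W_hat = np.zeros_like(phi)                 # weights initialized to zero, as in the paper
y_hat = W_hat @ phi
```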

Fig. 2. Structure of CNN.

The different choices of T_1(x) are x, 2x, 2x-1 and 2x+1. In this paper T_1(x) is chosen as x. The network is shown in Fig. 2. The output of the single layer neural network is given by

\hat{y} = \hat{W}^T \phi   (6)

where the weights of the neural network are given by \hat{W} = [w_1 \; w_2 \; \ldots]^T. A general nonlinear function f(x) ∈ C^n(S), x(t) ∈ S, can be approximated by the CNN as

f(x) = W^T \phi + \varepsilon   (7)

where \varepsilon is the CNN functional reconstruction error.

In the CNN, functional expansion of the input increases the dimension of the input pattern. Thus, creation of nonlinear decision boundaries in the multidimensional input space and approximation of complex nonlinear systems become easier [6], [7]. The output of the plant is W^T \phi, where W are the optimal weights of the neural network. Thus the error is defined as

e = (W - \hat{W})^T \phi = \tilde{W}^T \phi   (8)

Remark: In this paper, Chebyshev polynomials are used for functional expansion. Note that other basis functions such as Legendre, Bessel or trigonometric polynomials can also be used for functional expansion; trigonometric polynomials have been used for identification in [7].

3.2. Learning Algorithm

As the CNN is a single layer neural network, it is linear in the weights. We shall use the recursive least squares method with forgetting factor as the learning algorithm for the purpose of on-line weight updating. The cost function to be minimized is given by

J(t) = \frac{1}{2} \int_0^t e^{-\lambda(t-s)} \left[ y(s) - \hat{W}^T(t)\phi(s) \right]^2 ds   (9)

The learning algorithm for the continuous time model is

\dot{\hat{W}} = P \phi e

\dot{P} = \lambda P - P \phi \phi^T P \;\; \text{if } \|P(t)\| \le \rho_1, \quad \dot{P} = 0 \;\; \text{otherwise}   (10)

In terms of P^{-1} we have

\frac{d}{dt}\left(P^{-1}\right) = -\lambda P^{-1} + \phi \phi^T   (11)
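In implementation, these continuous update laws are integrated numerically. A minimal forward-Euler sketch of Eq. (10) follows; the step size, forgetting factor and covariance bound values are our assumptions:

```python
import numpy as np

def rls_continuous_step(W, P, phi, e, lam=0.5, dt=1e-3, rho1=1e6):
    """One forward-Euler step of the update laws (10):
    dW/dt = P phi e;  dP/dt = lam P - P phi phi^T P while ||P|| <= rho1, else 0."""
    W = W + dt * (P @ phi * e)                           # weight update
    if np.linalg.norm(P) <= rho1:                        # covariance bounding
        P = P + dt * (lam * P - P @ np.outer(phi, phi) @ P)
    return W, P
```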

The algorithm for the discrete time model is given by

k(n) = \frac{P(n-1)\,\phi(n)}{\lambda + \phi^T(n)\,P(n-1)\,\phi(n)}

\hat{W}(n) = \hat{W}(n-1) + k(n)\,e(n)   (12)

P(n) = \frac{1}{\lambda}\left[ P(n-1) - k(n)\,\phi^T(n)\,P(n-1) \right]

where λ is the forgetting factor, φ is the basis function formed by the functional expansion of the input, P(0) = cI with c a positive constant, and ‖P(t)‖ ≤ ρ_1, where ρ_1 is a constant that serves as an upper bound for ‖P(t)‖. All matrices and vectors are of compatible dimension for the purpose of computation.
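A minimal sketch of one step of the discrete algorithm (12) in Python; the forgetting factor value and the choice of c are our assumptions:

```python
import numpy as np

def rls_step(W, P, phi, y, lam=0.98):
    """One recursive least squares update with forgetting factor (Eq. 12)."""
    e = y - W @ phi                           # a priori identification error e(n)
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector k(n)
    W = W + k * e                             # weight update
    P = (P - np.outer(k, phi) @ P) / lam      # covariance update
    return W, P, e

# Initialization as in the paper: weights zero, P(0) = c I with c > 0
n_terms = 9
W = np.zeros(n_terms)
P = 100.0 * np.eye(n_terms)                   # c = 100 is our choice
```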


3.3. Stability Analysis

By choosing an appropriate quadratic Lyapunov function it can be proved that the error is bounded and the neural network weights Ŵ converge to W. For the continuous time case, choose the Lyapunov function



V = \frac{1}{2}\,\tilde{W}^T P^{-1} \tilde{W}   (13)

The derivative of the Lyapunov function is given by

\dot{V} = \tilde{W}^T P^{-1} \dot{\tilde{W}} + \frac{1}{2}\,\tilde{W}^T \frac{d(P^{-1})}{dt}\,\tilde{W}   (14)

which finally gives

\dot{V} = -\frac{1}{2}\,e^T e \le 0   (15)

This condition ensures e(t) → 0 as t → ∞. The stability of discrete time plants can be verified along the same lines by choosing the Lyapunov function

V(n) = \frac{1}{2}\,\tilde{W}^T(n) P^{-1}(n) \tilde{W}(n)   (16)
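For completeness, (15) can be verified by substituting the update laws into (14); a sketch of the standard argument, written here for λ = 0 (for λ > 0 an additional term −λV appears on the right, which only strengthens the inequality):

\begin{aligned}
\dot{V} &= \tilde{W}^{T}P^{-1}\dot{\tilde{W}} + \tfrac{1}{2}\,\tilde{W}^{T}\phi\phi^{T}\tilde{W}
  && \text{by (11), since } \tfrac{d}{dt}\left(P^{-1}\right) = \phi\phi^{T} \text{ when } \lambda = 0 \\
 &= -\tilde{W}^{T}\phi\, e + \tfrac{1}{2}\,\bigl(\tilde{W}^{T}\phi\bigr)^{2}
  && \text{since } \dot{\tilde{W}} = -\dot{\hat{W}} = -P\phi e \text{ (W constant)} \\
 &= -e^{2} + \tfrac{1}{2}\,e^{2} = -\tfrac{1}{2}\,e^{T}e \le 0
  && \text{since } e = \tilde{W}^{T}\phi .
\end{aligned}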

4. SIMULATIONS

Extensive simulation studies were carried out with several examples of nonlinear dynamic systems in the continuous and discrete time domains. Two examples in the continuous time domain and two in the discrete time domain are presented below.

Example 1: The Van der Vusse chemical stirred tank reactor

The Van der Vusse chemical stirred tank reactor (CSTR) can be described by a set of differential equations with output

y = x_2

where q_c is the input to the system and represents the flow rate, x_1 is the concentration of the chemical input and x_2 is the concentration of the output chemical. The input to the reactor is chosen as u = 0.5 sin(2t) + 0.2 sin(4t). For the CNN the input is expanded to 9 terms using Chebyshev polynomials given by Eq. (5). The neural network weights are initialized to zero and are updated using the algorithm given by Eq. (10). The results of the identification are shown in Fig. 3, where the solid line represents the neural network model output and the dashed line is the actual system output.

Example 2: The Inverted pendulum

The equations that describe the dynamics of the pendulum are given by

\dot{x}_1 = x_2

\dot{x}_2 = -\frac{g}{l}\sin x_1 - \frac{v}{m l^2}\,x_2 + \frac{1}{m l^2}\,u

where g = 9.8, m = 2, l = 1 and v = 1.5. The input to the pendulum is the same as in the previous example. For the CNN the input is expanded to 9 terms using Chebyshev polynomials.

Fig. 3. Identification of CSTR.

Here also the neural network weights are initialized to zero. The results of the identification are shown in Fig. 4, where the solid line represents the neural network model output and the dashed line is the actual system output. It can be seen from Fig. 3 and Fig. 4 that the identification of both plants is satisfactory and the error is bounded.

Fig. 4. Identification of inverted pendulum.
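To make the procedure concrete, the following is a minimal simulation sketch of the continuous time scheme applied to the pendulum. The choice of CNN input pattern ([x1, x2, u] expanded to order 2, giving 7 terms rather than the paper's 9), the identified quantity (the x2 dynamics), the step size and the constants are all our assumptions, since the paper does not fully specify them:

```python
import numpy as np

g, m, l, v = 9.8, 2.0, 1.0, 1.5            # pendulum parameters from the paper
dt, T_end = 1e-3, 40.0

def f_plant(x, u):
    """True pendulum dynamics (unknown to the identifier)."""
    x1, x2 = x
    return np.array([x2, -(g / l) * np.sin(x1) - v / (m * l**2) * x2 + u / (m * l**2)])

def expand(z_vec, order=2):
    """Chebyshev functional expansion of an input pattern (Eqs. 4 and 5)."""
    phi = [1.0]
    for z in z_vec:
        T = [1.0, z]
        for _ in range(2, order + 1):
            T.append(2.0 * z * T[-1] - T[-2])
        phi.extend(T[1:])
    return np.array(phi)

n = len(expand([0.0, 0.0, 0.0]))           # 7 terms for 3 inputs at order 2
W, P = np.zeros(n), 100.0 * np.eye(n)      # W(0) = 0, P(0) = c I
lam, rho1 = 0.5, 1e6                       # forgetting factor, covariance bound

x = np.array([0.1, 0.0])
for k in range(int(T_end / dt)):
    t = k * dt
    u = 0.5 * np.sin(2 * t) + 0.2 * np.sin(4 * t)   # same input as Example 1
    xdot = f_plant(x, u)
    phi = expand([x[0], x[1], u])
    e = xdot[1] - W @ phi                  # error on the identified x2 dynamics
    W = W + dt * (P @ phi * e)             # Euler step of dW/dt = P phi e (Eq. 10)
    if np.linalg.norm(P) <= rho1:          # Euler step of dP/dt (Eq. 10)
        P = P + dt * (lam * P - P @ np.outer(phi, phi) @ P)
    x = x + dt * xdot                      # integrate the plant forward
```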

Example 3:

We consider a discrete time plant described by

y(k+1) = 0.3\,y(k) + 0.6\,y(k-1) + g[u(k)]

where the input is

u(k) = \sin(2\pi k / 250) \;\; \text{for } 0 < k \le 250, \qquad u(k) = 0.8\sin(2\pi k / 250) \;\; \text{for } k > 250



Fig. 5. Identification of discrete plant (Example 3).

Here the nonlinear function g is unknown. To identify the plant, the model is governed by the difference equation for y(k+1) given above. For the CNN, the input is expanded to 14 terms using Chebyshev polynomials, and the input to the actual system and the neural network model is u(k). The neural network weights are initialized to zero and are updated using the algorithm given by Eq. (12). The results of the identification are shown in Fig. 5, and the identification error is very small.

Example 4:

Here, the plant is described by the following difference equation

y(k+1) = f[y(k)] + g[u(k)]

where the unknown functions f and g are nonlinear, with

g(u) = u\,(u - 0.5)\,(u + 0.8)

The model for identification is given by

\hat{y}(k+1) = N_1[y(k)] + N_2[u(k)]

where N_1 and N_2 are the two CNNs used to approximate the nonlinear functions f and g, respectively. The input to the neural network N_1 is y(k), which is expanded to 7 terms, and the input to network N_2 is u(k), which is also expanded to 7 terms. Both N_1 and N_2 are represented by a {7-1} structure. The input to the plant is the same as in the previous example. It can be seen from Fig. 6 that the outputs of the plant and the model are almost the same and the error is very small.

Fig. 6. Identification of discrete plant (Example 4).
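A minimal sketch of the Example 4 setup in Python. Since the definition of f is not legible in the source, a stand-in f(y) = y/(1 + y^2) is used purely as a placeholder, and we stack the two 7-term expansions into one regressor so that the standard RLS update (12) applies to both networks jointly (our simplification of the two-network arrangement):

```python
import numpy as np

def expand(z, order=6):
    """Chebyshev expansion of a scalar to 1 + order = 7 terms ({7-1} structure)."""
    T = [1.0, z]
    for _ in range(2, order + 1):
        T.append(2.0 * z * T[-1] - T[-2])
    return np.array(T)

g_true = lambda u: u * (u - 0.5) * (u + 0.8)   # as reconstructed above
f_true = lambda y: y / (1.0 + y**2)            # placeholder: f is not recoverable

n = 14                                          # 7 terms for N1 + 7 terms for N2
W = np.zeros(n)                                 # W[:7] ~ N1, W[7:] ~ N2
P = 100.0 * np.eye(n)                           # P(0) = c I, c = 100 is our choice
lam = 0.98                                      # forgetting factor (our choice)

y = 0.0
for k in range(1000):
    u = np.sin(2 * np.pi * k / 250) if k <= 250 else 0.8 * np.sin(2 * np.pi * k / 250)
    phi = np.concatenate((expand(y), expand(u)))        # joint regressor
    y_next = f_true(y) + g_true(u)                      # plant: y(k+1) = f + g
    e = y_next - W @ phi                                # model: N1[y(k)] + N2[u(k)]
    kg = P @ phi / (lam + phi @ P @ phi)                # Eq. (12): gain vector
    W, P = W + kg * e, (P - np.outer(kg, phi) @ P) / lam
    y = y_next
```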

5. CONCLUSION

In this paper, we have presented identification schemes in a feedforward neural network framework that ensure identification of general nonlinear dynamical systems. The proposed scheme, firstly, does not need any off-line training and, secondly, requires no special initialization of the neural network weights: the weights are initially assumed to be zero. As the neural network is a single layer structure, it is computationally fast and simple. It is important to remark that we use continuous and discrete time models that capture the dynamics of the systems through the updating of the weights of the neural network model by the recursive least squares algorithm. The simulation results verify the efficacy of the identification scheme.

Work is currently underway to combine the proposed identification scheme with a control scheme for nonlinear systems.

REFERENCES

[1] S. Haykin, Neural Networks. Ottawa, Canada: Maxwell Macmillan, 1994.

[2] C. Kambhampati, R. J. Craddock, M. Tham and K. Warwick, "Inverse model control using recurrent networks," Mathematics and Computers in Simulation, pp. 181-199, 2000.


[3] T. T. Lee and J. T. Jeng, "The Chebyshev polynomial based unified model neural networks for function approximation," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 28, pp. 925-935, 1998.

[4] W. T. Miller, R. S. Sutton and P. J. Werbos, Neural Networks for Control. Cambridge, MA: MIT Press, 1990.

[5] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Trans. Neural Networks, vol. 1, pp. 4-26, 1990.

[6] J. C. Patra and A. C. Kot, "Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 32, no. 4, pp. 505-511, 2002.


[7] J. C. Patra, R. N. Pal, B. N. Chatterji and G. Panda, "Identification of nonlinear dynamic systems using functional link artificial neural networks," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 29, no. 2, pp. 254-262, 1999.
