Show that the following functions satisfy the Lipschitz condition in the indicated rectangles, and find the corresponding Lipschitz constants. Show that the following functions do not satisfy the Lipschitz condition in the indicated regions.
Picard’s Successive Approximations
In the next section, we show that the sequence {xn} converges to a unique solution of (1.5) provided f satisfies the Lipschitz condition. Before we end this section, let us take a few examples. Let us note that xn is the nth partial sum of the power series of e^(-t).
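As a quick numerical illustration (not from the text), the successive approximations can be carried out exactly on polynomial coefficients for the assumed model IVP x' = -x, x(0) = 1; the nth iterate is then the nth partial sum of the series of e^(-t):

```python
from fractions import Fraction
from math import factorial

def picard_step(coeffs):
    """One Picard iteration for x' = -x, x(0) = 1:
    x_{n+1}(t) = 1 + integral_0^t (-x_n(s)) ds.
    Polynomials are coefficient lists, coeffs[k] = coefficient of t^k."""
    integrated = [Fraction(0)] + [-c / (k + 1) for k, c in enumerate(coeffs)]
    integrated[0] += 1          # add the initial value x(0) = 1
    return integrated

x = [Fraction(1)]               # x_0(t) = 1
for _ in range(5):
    x = picard_step(x)          # x_5 is the 5th partial sum of e^(-t)

expected = [Fraction((-1) ** k, factorial(k)) for k in range(6)]
print(x == expected)
```

Working with exact rationals makes the agreement with the series coefficients (-1)^k / k! an identity rather than a floating-point approximation.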
Picard’s Theorem
We know that IVP (1.5) corresponds to the integral equation (1.6), and it is sufficient to show that the successive approximations xn converge to a unique solution of (1.6) and thus to the unique solution of IVP (1.5). But x(t) = t^4 is yet another solution of the IVP, which contradicts the conclusion of Picard's theorem; so Picard's theorem may fail if the Lipschitz condition on f is dropped entirely.
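The classical instance of this failure is the IVP x' = 4 x^(3/4), x(0) = 0 (an assumption here, since the text does not spell out f); the sketch below checks that x(t) = t^4 satisfies it, while x = 0 trivially does too, so uniqueness fails:

```python
def f(x):
    # right-hand side of the assumed non-uniqueness example x' = 4 x^(3/4)
    return 4.0 * x ** 0.75

def residual(t):
    # derivative of x(t) = t^4 is 4 t^3; compare with f(t^4) = 4 (t^4)^(3/4) = 4 t^3
    return abs(4.0 * t ** 3 - f(t ** 4))

checks = [residual(t) for t in (0.0, 0.5, 1.0, 2.0)]
print(all(r < 1e-9 for r in checks))
```

Note that f is continuous but not Lipschitz near x = 0 (its x-derivative blows up there), which is exactly the hypothesis Picard's theorem needs.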
Continuation And Dependence On Initial Conditions
We note that this is a solution of (1.5) on h2 ≤ t ≤ h2 + α, and so it only remains to verify that z0 is continuous at the point t = h2. Indeed, the Gronwall inequality has many more applications in the qualitative theory of differential equations, as we will see later.
Existence of Solutions in the Large
Furthermore, the convergence of the sequence {xn} is uniform, implying that the limit x is continuous. Remark: The example cited at the beginning of this section does not contradict Theorem 1.5.1, since f(t, x) = x^2 does not satisfy the strip condition f ∈ Lip(S, K).
Existence and Uniqueness of Solutions of Systems
The subsequent lemma states that under the stated conditions the successive approximations are indeed well defined. Then the successive approximations defined by (1.43) converge uniformly on I : |t − t0| ≤ h to a unique solution of IVP (1.40).
Cauchy-Peano Theorem
It is not difficult to show that {xn} is equicontinuous and uniformly bounded on Ih. By an application of the Ascoli-Arzela theorem, {xn} has a subsequence {xnk} that converges uniformly (to x, say) on Ih.
Introduction
Linear Dependence and Wronskian
Thus, if two functions are linearly dependent on an interval I, one of them is a constant multiple of the other. Since x1 is a multiple of x2, the two functions are linearly dependent on R. sin t and cos t are linearly independent on the interval I = [0, 2π].
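A small numerical check of these claims (illustrative, not the text's proof): the Wronskian W(f, g) = f g' − f' g equals −1 identically for sin and cos, so it never vanishes and the two are independent, while it vanishes identically for two proportional functions:

```python
import math

def wronskian(f, g, t, h=1e-6):
    # W(f, g)(t) = f g' - f' g, with central-difference derivatives
    df = (f(t + h) - f(t - h)) / (2 * h)
    dg = (g(t + h) - g(t - h)) / (2 * h)
    return f(t) * dg - df * g(t)

# sin and cos: W = sin*(-sin) - cos*cos = -1, never zero => independent
vals = [wronskian(math.sin, math.cos, t) for t in (0.0, 1.0, 2.5)]
ok_indep = all(abs(v + 1.0) < 1e-8 for v in vals)

# t and 3t are proportional: the Wronskian vanishes identically
vals2 = [wronskian(lambda t: t, lambda t: 3 * t, t) for t in (0.5, 2.0)]
ok_dep = all(abs(v) < 1e-8 for v in vals2)

print(ok_indep, ok_dep)
```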
Basic Theory for Linear Equations
Thus, a homogeneous linear equation of the third order has a set of three linearly independent solutions. Suppose that xp is any particular solution of (2.12) existing on I and that xh is the general solution of the homogeneous equation L(x) = 0 on I.
Method of Variation of Parameters
Now, substituting the values of u1 and u2 into (2.16), we obtain the desired particular solution of equation (2.14). Find the general solution of x''' + x'' + x' + x = 1, given that cos t, sin t and e^(-t) are three linearly independent solutions of the corresponding homogeneous equation.
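As a sanity check on the quoted exercise (a sketch, assuming the standard answer x = 1 + c1 cos t + c2 sin t + c3 e^(-t)), one can verify with exact derivative formulas that this family satisfies x''' + x'' + x' + x = 1 for every choice of the constants:

```python
import math

def check(a, b, c, t):
    # x(t) = 1 + a cos t + b sin t + c e^(-t) and its exact derivatives
    x0 = 1 + a * math.cos(t) + b * math.sin(t) + c * math.exp(-t)
    x1 = -a * math.sin(t) + b * math.cos(t) - c * math.exp(-t)
    x2 = -a * math.cos(t) - b * math.sin(t) + c * math.exp(-t)
    x3 = a * math.sin(t) - b * math.cos(t) - c * math.exp(-t)
    return x3 + x2 + x1 + x0      # should equal 1 for every a, b, c, t

vals = [check(a, b, c, t)
        for a in (0.0, 2.0) for b in (-1.0, 3.0) for c in (0.5,)
        for t in (0.0, 1.3)]
print(all(abs(v - 1.0) < 1e-12 for v in vals))
```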
Homogeneous Linear Equations with Constant Coefficients
This means that the real part and the imaginary part of a solution are also solutions of equation (2.33). We now consider the case when roots of (2.39) have multiplicity (real or complex): (i) when a real root has multiplicity m1; (ii) when a complex root has multiplicity m1. Thus, if all the roots of the characteristic equation (2.39) are known, regardless of whether they are simple or multiple, we obtain n linearly independent solutions, and the general solution of (2.34) follows.
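For a concrete instance of a multiple root (an illustrative equation, not (2.39) itself): x'' + 2x' + x = 0 has characteristic root −1 with multiplicity 2, and t e^(-t) is the second independent solution alongside e^(-t). The check below uses exact derivatives:

```python
import math

def residual(t):
    # x(t) = t e^(-t):  x' = (1 - t) e^(-t),  x'' = (t - 2) e^(-t)
    x0 = t * math.exp(-t)
    x1 = (1 - t) * math.exp(-t)
    x2 = (t - 2) * math.exp(-t)
    return x2 + 2 * x1 + x0       # x'' + 2x' + x, should vanish identically

ok = all(abs(residual(t)) < 1e-12 for t in (0.0, 0.7, 2.0, 5.0))
print(ok)
```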
In order to find the general solution of a non-homogeneous equation, we need a particular solution of the given equation.
Introduction
Systems of First Order Equations
In the example above, we saw a concise way of writing a system of two equations in vector form. The vector notation for x and f is convenient when linear systems of equations are in focus. For the given system of three equations: (i) show that the system is linear in x1, x2 and x3; (ii) find a solution of the system.
Find an upper bound for f on the rectangle. Write down the linear system of equations. (i) Prove that x1 satisfies the second order equation. (ii) Show that the above equation has a solution.
Fundamental Matrix
The first part of the theorem is a simple consequence of Theorem 3.3.2 and the fact that the product of non-singular matrices is non-singular. By the uniqueness theorem there exists a unique fundamental matrix Φ(t) for the given system such that Φ(0) = E. Let Φ be a fundamental matrix for (3.14) and let C be any constant non-singular matrix; show that, in general, CΦ need not be a fundamental matrix.
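The exercise about CΦ can be checked numerically. In the sketch below (with an assumed diagonal A, not the system of (3.14)), ΦC still satisfies the matrix differential equation, while CΦ does not, because C fails to commute with A:

```python
import math

# x' = Ax with A = diag(1, 2); Phi(t) = diag(e^t, e^(2t)) is a fundamental matrix
def phi(t):
    return [[math.exp(t), 0.0], [0.0, math.exp(2 * t)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 0.0], [0.0, 2.0]]
C = [[0.0, 1.0], [1.0, 0.0]]      # a constant non-singular matrix

def deriv(M, t, h=1e-6):
    # entrywise central-difference derivative of a matrix-valued function
    P, Q = M(t + h), M(t - h)
    return [[(P[i][j] - Q[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

def close(X, Y, tol=1e-4):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(2) for j in range(2))

t = 0.3
# Phi*C satisfies (Phi C)' = A (Phi C): right multiplication preserves fundamentality
ok_right = close(deriv(lambda s: matmul(phi(s), C), t), matmul(A, matmul(phi(t), C)))
# C*Phi fails (C Phi)' = A (C Phi) because C does not commute with A
ok_left = close(deriv(lambda s: matmul(C, phi(s)), t), matmul(A, matmul(C, phi(t))))
print(ok_right, not ok_left)
```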
Then show that Ψ is a fundamental matrix of the adjoint system (3.23) if and only if Ψ^T Φ = C, where C is a constant non-singular matrix.
Non-homogeneous linear Systems
Linear Systems with Constant Coefficients
Again, we recall from Theorem 3.5.1 that the general solution of the system (3.32) is e^(tA)c. Let λ1 be an eigenvalue of A and, corresponding to this eigenvalue, let c1 be a non-trivial solution of (3.37). The next step is to find the nature of the fundamental matrix in the case of repeated eigenvalues of A.
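A minimal sketch of computing e^(tA) via the truncated exponential series (the matrix A below is an assumed example, not the coefficient matrix of (3.32)); for the rotation generator the series reproduces cosines and sines:

```python
import math

def mat_exp(A, t, terms=30):
    # e^(tA) = sum over k >= 0 of (tA)^k / k!, truncated power series (2x2 case)
    n = 2
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]        # (tA)^0 / 0! = identity
    for k in range(1, terms):
        # term <- term * A * (t / k), so term holds (tA)^k / k!
        term = [[sum(term[i][m] * A[m][j] * t / k for m in range(n))
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]   # e^(tA) = [[cos t, sin t], [-sin t, cos t]]
t = 0.9
E = mat_exp(A, t)
expected = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
ok = all(abs(E[i][j] - expected[i][j]) < 1e-10 for i in range(2) for j in range(2))
print(ok)
```

For larger matrices or large |t| one would use scaling-and-squaring rather than the raw series, but the truncated sum is enough to illustrate the formula e^(tA)c.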
Namely, since the rank of the coefficient matrix is 2, there is only one linearly independent eigenvector.
Phase Portraits-Introduction
Phase Portraits in R 2 (continued)
In general, it is easy to draw the phase portrait of (3.49) when A is in its canonical form. At this point it is clear that the phase portrait of (3.49) is the phase portrait of (3.51) under the transformation x = P y. In the case λ ≥ µ > 0 or µ ≥ λ > 0, the phase portrait remains essentially the same as shown in Figure 3.5, except that the direction of the arrows is reversed.
We also note that the phase portraits for (??) are a family of ellipses as shown in Figure 8.
Introduction
Although the proof of Theorem 4.1.5 is elementary, the conclusion greatly simplifies the subsequent work. It thus follows that x1 and x2 are linearly dependent, which contradicts the hypothesis; in other words, x1 and x2 cannot have common zeros. Since the derivative of x is continuous and positive at t = a, it follows that x is strictly increasing in some neighborhood of t = a, so t = a is the only zero of x in that neighborhood.
Prove that equation (4.2) is nonoscillatory if and only if equation (4.3) is nonoscillatory.
Sturm’s Comparison Theorem
We note that between each two successive zeros of a solution x of (i), every solution of (ii) admits a zero. Assuming the hypotheses of Theorem 4.2.1, let us pose a question: is it true that between each two zeros of a solution y of equation (4.5) there is a zero of a solution x of equation (4.4)? This clearly shows that, under the hypotheses of Theorem 4.2.1, there need not exist a zero of x between two consecutive zeros of y.
(i) Find the normal form of Bessel's equation. (ii) If p = 1/2, prove that every zero of Jp(t) is at a distance of π from its successive zero.
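For p = 1/2 the claim can be checked directly, since J_{1/2}(t) = sqrt(2/(πt)) sin t in closed form; the sketch below locates the first few positive zeros by bisection and confirms the spacing π:

```python
import math

def j_half(t):
    # closed form J_{1/2}(t) = sqrt(2 / (pi t)) * sin t, valid for t > 0
    return math.sqrt(2.0 / (math.pi * t)) * math.sin(t)

def bisect(f, lo, hi, iters=80):
    # simple bisection; assumes f changes sign on [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# bracket the first four positive zeros near pi, 2pi, 3pi, 4pi
zeros = [bisect(j_half, n * math.pi - 1.0, n * math.pi + 1.0) for n in range(1, 5)]
gaps = [b - a for a, b in zip(zeros, zeros[1:])]
ok = all(abs(g - math.pi) < 1e-9 for g in gaps)
print(ok)   # consecutive zeros are exactly pi apart
```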
Elementary Linear Oscillations
The condition (4.12) cannot be dropped, as the following example shows. Check for oscillation or non-oscillation of the following equations. (i) Show that Bessel's equation is oscillatory for all values of p.
Boundary Value Problems
In this case, it is in no way implied whether such a solution exists or not. (ii) An example of a linear homogeneous BVP is given by the boundary conditions x(A) = x(B) and x'(A) = x'(B), commonly known as periodic boundary conditions specified at t = A and t = B. (Regular linear BVP) A linear BVP, homogeneous or non-homogeneous, is called a regular BVP if A and B are finite and, in addition, a(t) ≠ 0 for all t in (A, B). A careful analysis of the above definition shows that non-linearity can enter a BVP in two ways: (i) the differential equation may be non-linear; (ii) the differential equation may be linear, but the boundary conditions may not be linear homogeneous.
State with reasons whether the following BVPs are linear homogeneous, linear non-homogeneous or non-linear.
Sturm-Liouville Problem
Two (sufficiently smooth) functions x and y, defined and continuous on [A, B], are said to be orthogonal with respect to a continuous weight function r if the integral of r(t) x(t) y(t) over [A, B] is zero. Now the boundary conditions (4.18) and (4.19) or (4.20) play a central role in the desired orthogonality of the eigenfunctions. Moreover, let xm and xn be two eigenfunctions of the BVP (4.17), (4.18) and (4.19) corresponding to two different eigenvalues λm and λn.
It is easy to show what the eigenfunctions are. (ii) Let the Legendre polynomials Pn(t) be the solutions of the Legendre equation.
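The orthogonality of the Legendre polynomials on [−1, 1] (weight function 1) can be verified numerically; the sketch below builds Pn by the Bonnet recursion and approximates the integrals with Simpson's rule:

```python
def legendre(n, t):
    # Bonnet recursion: (k + 1) P_{k+1} = (2k + 1) t P_k - k P_{k-1}
    p0, p1 = 1.0, t
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * t * p1 - k * p0) / (k + 1)
    return p1

def inner(m, n, steps=2000):
    # composite Simpson approximation of the integral of P_m P_n over [-1, 1]
    h = 2.0 / steps
    total = 0.0
    for i in range(steps + 1):
        t = -1.0 + i * h
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        total += w * legendre(m, t) * legendre(n, t)
    return total * h / 3.0

ok_orth = abs(inner(2, 3)) < 1e-10                 # distinct eigenvalues: orthogonal
ok_norm = abs(inner(2, 2) - 2.0 / 5.0) < 1e-6      # known norm: ||P_n||^2 = 2/(2n+1)
print(ok_orth, ok_norm)
```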
Green’s Functions
With this choice of c1 and c2, G(t, s) defined by relation (4.35) has all the properties of the Green's function. Condition (iv) in the definition of Green's function now shows that the value of expression (4.48) is x(t).
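As an illustration of how G(t, s) is used (a sketch for the model problem x'' = f(t), x(0) = x(1) = 0 with an assumed sign convention, not (4.35) itself), the solution is recovered by integrating G against f:

```python
# Green's function for x'' = f(t), x(0) = x(1) = 0; for -x'' = f drop the minus sign
def G(t, s):
    return -(s * (1 - t) if s <= t else t * (1 - s))

def solve(f, t, steps=4000):
    # x(t) = integral_0^1 G(t, s) f(s) ds, approximated by the midpoint rule
    h = 1.0 / steps
    return sum(G(t, (i + 0.5) * h) * f((i + 0.5) * h) for i in range(steps)) * h

f = lambda s: 2.0              # x'' = 2, x(0) = x(1) = 0 has solution x(t) = t^2 - t
errs = [abs(solve(f, t) - (t * t - t)) for t in (0.25, 0.5, 0.8)]
print(all(e < 1e-6 for e in errs))
```

The kink of G along s = t is exactly the jump-in-derivative property from the definition of the Green's function.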
Introduction
Linear Systems with Constant Coefficients
Every solution of equation (5.1) tends to zero as t → +∞ if and only if the real parts of all eigenvalues of A are negative. The behavior of x for large values of t depends on the sign of the constant ϕ and on the function b. Thus, the behavior of the solution for large values of t depends on q and on L.
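The eigenvalue criterion can be illustrated numerically (the matrix below is an assumed example, not the A of (5.1)); with eigenvalues −1 and −2, an RK4 integration of x' = Ax decays toward the origin:

```python
import math

# x' = Ax with eigenvalues -1 and -2 (both real parts negative)
A = [[-1.0, 1.0], [0.0, -2.0]]

def step(x, h):
    # one classical RK4 step for the linear vector field Ax
    def f(v):
        return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
    k1 = f(x)
    k2 = f([x[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

x = [3.0, -2.0]
h = 0.01
for _ in range(1000):          # integrate up to t = 10
    x = step(x, h)
norm = math.hypot(x[0], x[1])
print(norm < 1e-3)             # the solution has decayed to (numerically) zero
```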
Determine the limit as t → +∞ of the solutions of the system x' = Ax, where.
Linear Systems with Variable Coefficients
Upper bounds on the inverse of a fundamental matrix are useful for the study of boundedness of solutions. Theorem 5.3.5 stated below deals with a criterion for the boundedness of the inverse of a fundamental matrix. It is interesting to note that Theorem 5.3.5 can be used to study the behavior of solutions of equations of the form (5.23). The following is a result on the boundedness of solutions of (5.23), obtained as a consequence of the variation of parameters formula.
The behavior of the solutions of such a system (5.25) is closely related to the behavior of the solution of the system (5.17).
Second Order Linear Differential Equations
We devote the rest of the module to introducing the concept of stability of solutions. For each perturbation of the pendulum a new motion is obtained, which is again a solution of the system. As stated earlier, this chapter is devoted to the study of stability of stationary solutions of systems described by ordinary differential equations.
It is important to note that the transformation (5.33) does not change the nature of the stability of a solution of (5.32).
Stability of Linear and Quasi-linear Systems
As a first step, we obtain necessary and sufficient conditions for the stability of the linear system (5.36). The result given below concerns the asymptotic stability of the null (or zero) solution of system (5.36). We saw earlier that if the characteristic roots of the matrix A have negative real parts, then every solution of (5.36) tends to zero as t → +∞.
Prove that all solutions of the system (5.36) are stable if and only if they are bounded.
Stability of Autonomous Systems
Since z ≥ 0 for all (x1, x2), the surface always lies above the plane OX1X2. If there exists a positive definite function V such that V˙ ≤ 0, then the origin of the system (5.54) is stable. If in Sρ there exists a positive definite function V such that −V˙ is also positive definite, then the origin of equation (5.54) is asymptotically stable.
If f is positive definite in some neighborhood of the origin, then the origin is asymptotically stable.
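A worked Lyapunov-function check (an illustrative system, not (5.54) itself): for x1' = -x2 - x1^3, x2' = x1 - x2^3, the candidate V = x1^2 + x2^2 is positive definite and V˙ = -2(x1^4 + x2^4), so -V˙ is positive definite and the origin is asymptotically stable:

```python
def f(x1, x2):
    # illustrative system: x1' = -x2 - x1^3, x2' = x1 - x2^3
    return (-x2 - x1 ** 3, x1 - x2 ** 3)

def V(x1, x2):
    return x1 * x1 + x2 * x2              # positive definite candidate

def Vdot(x1, x2):
    dx1, dx2 = f(x1, x2)
    # grad V . f; the cross terms cancel, leaving -2(x1^4 + x2^4)
    return 2 * x1 * dx1 + 2 * x2 * dx2

# -Vdot is positive definite away from the origin
neg = all(Vdot(a, b) < 0 for a, b in [(0.5, 0.1), (-1.0, 2.0), (0.3, -0.3)])

# V decreases along a numerically integrated trajectory (Euler, small step)
x1, x2 = 1.0, -1.0
v_prev = V(x1, x2)
decreasing = True
for _ in range(5000):
    d1, d2 = f(x1, x2)
    x1, x2 = x1 + 0.001 * d1, x2 + 0.001 * d2
    v = V(x1, x2)
    decreasing = decreasing and v <= v_prev + 1e-12
    v_prev = v
print(neg, decreasing)
```

Note that the linearization at the origin is a pure rotation (eigenvalues ±i), so the linear test is inconclusive here; the Lyapunov argument is what settles asymptotic stability.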
Stability of Non-autonomous Systems
We are now ready to prove the fundamental theorems about the stability of the equilibrium of the system (5.32). For large values of t, the right-hand side of (5.59) becomes negative, which contradicts the fact that V is positive definite. The equilibrium state of (5.32) is asymptotically stable in the large if there exists a positive definite, decrescent function V(t, x) such that V(t, x) → ∞ as |x| → ∞ for each t ∈ I, and such that V˙ is negative definite.
If aii < 0 for all i, then it is seen that V˙(x(t)) < 0, which implies asymptotic stability of the origin of the given system.
A Particular Lyapunov Function
Remark: The stability properties of the zero solution of equation (5.62) are unaffected if the system (5.60) is transformed by the relation x = P y, where P is a constant non-singular matrix. Thus, for each value in (−16, −6) the matrix is positive definite and therefore the zero solution of the system is asymptotically stable. For the asymptotic stability of the zero solution of system (5.64), the function f naturally has a role to play.
We expect that if f is small, then the zero solution of the system (5.64) may be asymptotically stable.