Second- and higher-order equations

Henry J. Ricardo , in A Modern Introduction to Differential Equations (Third Edition), 2021

4.6.1 An Existence and Uniqueness Theorem

At this point we have seen that the possibilities for second-order IVPs are similar to those we saw in Section 2.8 for first-order IVPs. We can have no solution, infinitely many solutions, or exactly one solution. Once again we would like to determine when there is one and only one solution of an IVP.

The simplest Existence and Uniqueness Theorem (EUT) for second-order differential equations is one that is a natural extension of the result we saw in Section 2.8.

Existence and Uniqueness Theorem (EUT)

Suppose we have a second-order IVP $\frac{d^2y}{dt^2} = f(t, y, \dot{y})$, with $y(t_0) = y_0$ and $\dot{y}(t_0) = \dot{y}_0$. If $f$, $f_y$, and $f_{\dot{y}}$ are continuous in a closed box $B$ in three-dimensional $(t, y, \dot{y})$ space and the point $(t_0, y_0, \dot{y}_0)$ lies inside $B$, then the IVP has a unique solution $y(t)$ on some $t$-interval $I$ containing $t_0$.
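As a worked illustration (a supplement, not part of the original text), consider the damped pendulum IVP

$$\ddot{y} = -\sin y - \dot{y}, \qquad y(0) = y_0, \quad \dot{y}(0) = \dot{y}_0.$$

Here $f(t, y, \dot{y}) = -\sin y - \dot{y}$, $f_y = -\cos y$, and $f_{\dot{y}} = -1$ are continuous on every closed box $B$, so the theorem guarantees a unique solution on some $t$-interval about $t_0 = 0$.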

URL: https://www.sciencedirect.com/science/article/pii/B9780128182178000117

First-order differential equations

Henry J. Ricardo , in A Modern Introduction to Differential Equations (Third Edition), 2021

2.8.1 An Existence and Uniqueness Theorem

For first-order differential equations the answers to the existence and uniqueness questions we have just posed are fairly easy. We have an Existence and Uniqueness Theorem —simple conditions that guarantee one and only one solution of an IVP.

Existence and Uniqueness Theorem

Let $R$ be a rectangular region in the $x$-$y$ plane described by the two inequalities $a \le x \le b$ and $c \le y \le d$. Suppose that the point $(x_0, y_0)$ is inside $R$. Then if $f(x, y)$ and the partial derivative $f_y(x, y)$ are continuous functions on $R$, there is an interval $I$ centered at $x = x_0$ and a unique function $y(x)$ defined on $I$ such that $y$ is a solution of the IVP $y' = f(x, y)$, $y(x_0) = y_0$.

This statement may seem a bit abstract, but it is the simplest and probably the most widely used result that guarantees the existence and uniqueness of a solution of a first-order IVP. Using this theorem is simple. Take your IVP, write it in the form $y' = f(x, y)$, $y(x_0) = y_0$, and then examine the functions $f(x, y)$ and $f_y$, the partial derivative of $f$ with respect to the dependent variable $y$. (Section A.7 has a quick review of partial differentiation.)
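For instance, here is a minimal sketch in SymPy of the routine check the theorem calls for; the IVP $y' = xy^2 + \sin x$, $y(0) = 1$, is a hypothetical example chosen for illustration:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical IVP for illustration: y' = x*y**2 + sin(x), y(0) = 1
f = x * y**2 + sp.sin(x)

# The theorem asks us to examine f and its partial derivative with respect to y
f_y = sp.diff(f, y)
print("f(x, y)  =", f)    # x*y**2 + sin(x)
print("f_y(x,y) =", f_y)  # 2*x*y

# Both are built from polynomials and sines, hence continuous on every
# rectangle containing (0, 1), so the IVP has a unique solution on some
# interval around x = 0.
```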

Fig. 2.39 gives an idea of what such a region R and interval I in the Existence and Uniqueness Theorem may look like.

Figure 2.39. Region of existence and uniqueness: $R = \{(x, y) \mid a \le x \le b,\ c \le y \le d\}$

It's important to note the following comments about this fundamental theorem:

1. If the conditions of our result are satisfied, then solution curves for the IVP can never intersect. (Do you see why?)

2. If $f(x, y)$ and $\partial f/\partial y$ happen to be continuous for all values of $x$ and $y$, our result does not say that the unique solution must be valid for all values of $x$ and $y$.

3. The continuity of $f(x, y)$ and $\partial f/\partial y$ is sufficient for the existence of solutions, but it is not necessary: you may have solutions even if the continuity condition is not satisfied.

4. This is an existence theorem: if the right conditions are satisfied, a solution exists, but you are not told how to find it. In particular, you may not be able to describe the interval $I$ without actually solving the differential equation.

The significance of these remarks will be explored in some of the following examples and in some of the problems in Exercises 2.8. First, let's apply the Existence and Uniqueness Theorem to IVPs involving first-order linear ODEs. In the last section of this chapter we'll sketch a proof of this important result.

Example 2.8.3 Any "Nice" Linear IVP Has a Unique Solution

Because linear equations model many important physical situations, it's important to know when these equations have unique solutions. We show that if $P(x)$ and $Q(x)$ are continuous ("nice") on an interval $(a, b)$ containing $x_0$, then any IVP of the form $\frac{dy}{dx} + P(x)y = Q(x)$, $y(x_0) = y_0$, has one and only one solution on $(a, b)$.

In terms of the Existence and Uniqueness Theorem, we have

$f(x, y) = -P(x)y + Q(x)$ and $f_y(x, y) = -P(x)$.

But both $P(x)$ and $Q(x)$ are assumed continuous on the rectangle $R = \{(x, y) \mid a \le x \le b,\ c \le y \le d\}$ for any values of $c$ and $d$, and $f(x, y)$ is a combination of continuous functions. (There are no values of $x$ and $y$ that give us division by zero or an even root of a negative number, for example.) The conditions of the theorem are satisfied, and so any IVP of the form described previously has a unique solution.

In Section 2.2 we showed how to find a solution of a linear differential equation explicitly. Now we see that, given an appropriate initial condition, the solution we find in that way is the unique solution.

Now let's go back and re-examine some examples we discussed earlier.

Example 2.8.4

Example 2.8.1 Revisited

Assume that $x$ is a function of the independent variable $t$. If we look at the IVP $\dot{x} = 1 + x^2$, $x(0) = x_0$, in light of the Existence and Uniqueness Theorem, we see that $f(t, x) = 1 + x^2$, a function of $x$ alone that is clearly continuous at all points $(t, x)$, and $f_x = 2x$, also continuous for all $(t, x)$.

The conditions of the theorem are satisfied, and so the IVP has a unique solution. But even though both $f(t, x)$ and $f_x$ are continuous for all values of $t$ and $x$, we know that any unique solution is limited to an interval

$$\left((2n-1)\frac{\pi}{2},\ (2n+1)\frac{\pi}{2}\right), \qquad n = 0, \pm 1, \pm 2, \pm 3, \ldots,$$

lying between consecutive vertical asymptotes of the tangent function. (Go back and look at the one-parameter family of solutions for the equation, and see comment 2 that follows the statement of the Existence and Uniqueness Theorem.)
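A minimal numerical sketch of this limitation (using SciPy; the blow-up threshold $10^6$ is an illustrative choice): starting from $x(0) = 0$, the exact solution is $x(t) = \tan t$, which blows up at $t = \pi/2$, and the integrator stops essentially there.

```python
import numpy as np
from scipy.integrate import solve_ivp

# x' = 1 + x^2, x(0) = 0; exact solution x(t) = tan(t), valid on (-pi/2, pi/2)
def f(t, x):
    return 1.0 + x[0] ** 2

# Stop when |x| reaches a large threshold (an illustrative stand-in for blow-up)
def blow_up(t, x):
    return abs(x[0]) - 1e6
blow_up.terminal = True

sol = solve_ivp(f, (0.0, 10.0), [0.0], events=blow_up, rtol=1e-10, atol=1e-10)
print("integration stopped near t =", sol.t[-1])  # close to pi/2
print("pi/2 =", np.pi / 2)
```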

Next, we reexamine Example 2.8.2 in light of the Existence and Uniqueness Theorem.

Example 2.8.5

Example 2.8.2 Revisited

Here, we have the form $\dot{x} = x^{2/3} = f(x)$, with $x(0) = 0$, so we must look at $f(x)$ and $f_x$. But $f_x = f'(x) = \frac{2}{3}x^{-1/3} = \frac{2}{3\sqrt[3]{x}}$, which is not continuous in any rectangle in the $t$-$x$ plane that includes $x = 0$ (that is, any part of the $t$-axis). Therefore, we shouldn't expect to have both existence and uniqueness on an interval of the form $[0, \beta]$, and in fact we don't have uniqueness, as we have seen.
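Separating variables makes the failure of uniqueness explicit (a worked supplement to the example):

$$x(t) \equiv 0, \qquad x(t) = \frac{t^3}{27}, \qquad \text{and} \qquad x_k(t) = \begin{cases} 0, & t \le k, \\ (t - k)^3/27, & t > k \end{cases} \quad (k > 0)$$

are all solutions of $\dot{x} = x^{2/3}$ with $x(0) = 0$, all passing through the origin.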

However, if we avoid the $t$-axis (that is, if we choose an initial condition $x(t_0) = x_0 \ne 0$), then the Existence and Uniqueness Theorem guarantees that there will be a unique solution for the IVP. Fig. 2.40a shows the slope field for the autonomous equation $\dot{x} = x^{2/3}$ in the rectangle $-1 \le t \le 5$, $0 \le x \le 3$. This rectangle includes part of the $t$-axis, and it is easy to visualize many solutions starting at the origin, gliding along the $t$-axis for a little while, and then taking off. Fig. 2.38 shows some of these solution curves.

Figure 2.40a. Slope field for $\dot{x} = x^{2/3}$, $-1 \le t \le 5$, $0 \le x \le 3$

Fig. 2.40b, on the other hand, shows what happens if we choose a rectangle that avoids the t-axis. It should be clear that if we pick any point ( t 0 , x 0 ) in this rectangle, there will be one and only one solution of the equation that passes through this point.

Figure 2.40b. Slope field for $\dot{x} = x^{2/3}$, $-1 \le t \le 5$, $1 \le x \le 3$

URL: https://www.sciencedirect.com/science/article/pii/B9780128182178000099

First Order Ordinary Differential Equations

Martha L. Abell , James P. Braselton , in Introductory Differential Equations (Fifth Edition), 2018

Chapter 2 Summary: Essential Concepts and Formulas

Existence and Uniqueness Theorems

In general, if $f$ and $\partial f/\partial y$ are continuous functions on the rectangular region $R$: $a < t < b$, $c < y < d$, containing the point $(t_0, y_0)$, then there exists an interval $|t - t_0| < h$ centered at $t_0$ on which there exists one and only one solution to the initial value problem $y' = f(t, y)$, $y(t_0) = y_0$. Special cases of the theorem are stated in Section 2.2 for separable equations, Section 2.3 for linear equations, and Section 2.4 for exact equations.

Separable Differential Equation

A differential equation that can be written in the form $g(y)\,y' = f(t)$ or $g(y)\,dy = f(t)\,dt$ is called a separable differential equation.

First Order Linear Differential Equation

A differential equation that can be written in the form $dy/dt + p(t)y = q(t)$ is called a first order linear differential equation.

Integrating Factor

An integrating factor for the first order linear differential equation is $\mu(t) = e^{\int p(t)\,dt}$.
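As a quick illustration, here is a sketch in SymPy; the particular equation $y' + 2y = e^{-t}$ is a hypothetical example. The integrating factor turns the left side into an exact derivative:

```python
import sympy as sp

t = sp.symbols("t")
y = sp.Function("y")

# Hypothetical first order linear equation: y' + 2y = exp(-t)
p, q = 2, sp.exp(-t)

# Integrating factor mu(t) = exp(integral of p dt)
mu = sp.exp(sp.integrate(p, t))          # exp(2t)

# Multiplying through by mu makes the left side (mu*y)', so
# mu*y = integral of mu*q dt + C
C = sp.symbols("C")
sol = (sp.integrate(mu * q, t) + C) / mu
print(sp.simplify(sol))                   # C*exp(-2*t) + exp(-t)

# Cross-check with SymPy's ODE solver
ode = sp.Eq(y(t).diff(t) + p * y(t), q)
print(sp.dsolve(ode))                     # Eq(y(t), (C1 + exp(t))*exp(-2*t))
```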

Exact Differential Equation

A differential equation that can be written in the form $M(t, y)\,dt + N(t, y)\,dy = 0$, where $M(t, y)\,dt + N(t, y)\,dy = f_t(t, y)\,dt + f_y(t, y)\,dy$ for some function $z = f(t, y)$, is called an exact differential equation.

Bernoulli Equation

A differential equation of the form $y' + p(t)y = q(t)y^n$.

Homogeneous Equation

A differential equation that can be written in the form $M(t, y)\,dt + N(t, y)\,dy = 0$ is homogeneous of degree $n$ if both $M(t, y)$ and $N(t, y)$ are homogeneous of degree $n$.

Numerical Methods

Numerical methods include computer assisted solution, Euler's method, the improved Euler's method, and the Runge-Kutta method. Other methods are typically discussed in more advanced numerical analysis courses.
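As a minimal illustration of the simplest of these methods, here is a sketch of Euler's method on the hypothetical test problem $y' = y$, $y(0) = 1$, whose exact solution at $t = 1$ is $e$:

```python
import math

def euler(f, t0, y0, t_end, n):
    """Approximate y(t_end) for y' = f(t, y), y(t0) = y0, using n Euler steps."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # follow the tangent line over one step
        t += h
    return y

# Test problem: y' = y, y(0) = 1; exact solution y(t) = e^t
approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 100)
print(approx, "vs exact", math.e)   # about 2.7048 vs 2.7183
```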

URL: https://www.sciencedirect.com/science/article/pii/B9780128149485000021

SOME FUNDAMENTAL TOOLS AND CONCEPTS FROM NUMERICAL LINEAR ALGEBRA

BISWA NATH DATTA , in Numerical Methods for Linear Control Systems, 2004

3.5 NUMERICAL SOLUTION OF THE LINEAR SYSTEM Ax = b

Given an n × n matrix A and the n-vector b, the algebraic linear system problem is the problem of finding an n-vector x such that Ax = b.

The principal uses of the LU factorization of a matrix A are: solving the algebraic linear system Ax = b, finding the determinant of a matrix, and finding the inverse of A.

We will discuss first how Ax = b can be solved using the LU factorization of A.

The following theorem gives results on the existence and uniqueness of the solution x of Ax = b. A proof can be found in any linear algebra text.

Theorem 3.5.1.

Existence and Uniqueness Theorem. The system Ax = b has a solution if and only if rank(A) = rank(A, b). The solution is unique if and only if A is invertible.

3.5.1 Solving Ax = b using the Inverse of A

The above theorem suggests that the unique solution x of Ax = b be computed as $x = A^{-1}b$.

Unfortunately, computationally this is not a practical idea. It generally involves more computations and gives less accurate answers.

This can be illustrated by the following trivial example:

Consider solving 3x = 27.

The exact answer is $x = 27/3 = 9$. Only one flop (one division) is needed in this process. On the other hand, if the problem is solved by writing it in terms of the inverse of A, we then have $x = \frac{1}{3} \times 27 = 0.3333 \times 27 = 8.9991$ (in four-digit arithmetic), a less accurate answer. Moreover, the process will need two flops: one division and one multiplication.

3.5.2 Solving Ax = b using Gaussian Elimination with Partial Pivoting

Since Gaussian elimination without pivoting does not always work and, even when it works, might give an unacceptable answer in certain instances, we only discuss solving Ax = b using Gaussian elimination with partial pivoting.

We have just seen that Gaussian elimination with partial pivoting, when used to triangularize A, yields a factorization PA = LU. In this case, the system Ax = b is equivalent to the two triangular systems $Ly = Pb$ and $Ux = y$.

Thus, to solve Ax = b using Gaussian elimination with partial pivoting, the following two steps need to be performed in the sequence.

Step 1.

Find the factorization PA = LU using Gaussian elimination with partial pivoting.

Step 2.

Solve the lower triangular system: Ly = Pb = b′ first, followed by the upper triangular system: Ux = y.

Forming the vector b′. The vector b′ is just the permuted version of b. So, to obtain b′, all that needs to be done is to permute the entries of b in the same way as the rows of the matrices $A^{(k)}$ have been interchanged. This is illustrated in the following example.

Example 3.5.1.

Solve the following system using Gaussian elimination with partial pivoting:

$$\begin{aligned} x_1 + 2x_2 + 4x_3 &= 7, \\ 4x_1 + 5x_2 + 6x_3 &= 15, \\ 7x_1 + 8x_2 + 9x_3 &= 24. \end{aligned}$$

Here

$$A = \begin{pmatrix} 1 & 2 & 4 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}, \qquad b = \begin{pmatrix} 7 \\ 15 \\ 24 \end{pmatrix}.$$

Using the results of Example 3.4.1, we have

$$L = \begin{pmatrix} 1 & 0 & 0 \\ \frac{1}{7} & 1 & 0 \\ \frac{4}{7} & \frac{1}{2} & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} 7 & 8 & 9 \\ 0 & \frac{6}{7} & \frac{19}{7} \\ 0 & 0 & -\frac{1}{2} \end{pmatrix}.$$

Since $r_1 = 3$ and $r_2 = 3$, $b' = (24, 7, 15)^T$.

Note that to obtain b′, first the 1st and 3rd components of b were permuted, according to $r_1 = 3$ (which means the interchange of rows 1 and 3), followed by the permutation of components 2 and 3, according to $r_2 = 3$ (which means the interchange of rows 2 and 3). $Ly = b'$ gives $y = \left(24, \frac{25}{7}, -\frac{1}{2}\right)^T$, and $Ux = y$ gives $x = (1, 1, 1)^T$.

Flop-count. The factorization process requires about $\frac{2}{3}n^3$ flops. The solution of each of the triangular systems $Ly = b'$ and $Ux = y$ requires $n^2$ flops. Thus, the solution of the linear system Ax = b using Gaussian elimination with partial pivoting requires about $\frac{2}{3}n^3 + O(n^2)$ flops. Also, the process requires $O(n^2)$ comparisons for pivot identifications.
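As a cross-check on Example 3.5.1, here is a minimal sketch using SciPy's LAPACK-backed LU routines, which carry out exactly this two-step scheme (factor with partial pivoting, then two triangular solves):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1.0, 2.0, 4.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
b = np.array([7.0, 15.0, 24.0])

# Step 1: PA = LU via Gaussian elimination with partial pivoting
lu, piv = lu_factor(A)   # lu packs L (unit diagonal) and U; piv records the swaps

# Step 2: forward substitution Ly = Pb, then back substitution Ux = y
x = lu_solve((lu, piv), b)
print(x)                 # [1. 1. 1.]
```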

Stability of Gaussian Elimination Scheme for Ax = b

We have seen that the growth factor ρ determines the stability of the triangularization procedure. Since the solution of triangular systems is numerically stable, the growth factor remains the dominating factor in solving linear systems by Gaussian elimination.

A large growth factor ρ is rare in practice for Gaussian elimination with partial pivoting. Thus, for all practical purposes, Gaussian elimination with partial pivoting for the linear system Ax = b is a numerically stable procedure.

3.5.3 Solving a Hessenberg Linear System

Certain control computations, such as computing the frequency response of a matrix (see Chapter 5), require the solution of a Hessenberg linear algebraic system. We have just seen that the LU factorization of a Hessenberg matrix requires only $O(n^2)$ flops and that Gaussian elimination with partial pivoting is safe, because the growth factor in this case is at most n. Thus, a Hessenberg system can be solved by Gaussian elimination with partial pivoting in $O(n^2)$ flops and in a numerically stable way.

3.5.4 Solving AX = B

In many practical situations, one faces the problem of solving multiple linear systems: AX = B. Here A is n × n and nonsingular and B is n × p. Since each of the systems here has the same coefficient matrix A, to solve AX = B, we need to factor A just once. The following scheme, then, can be used.

Partition $B = (b_1, \ldots, b_p)$.

Step 1.

Factorize A using Gaussian elimination with partial pivoting: PA = LU

Step 2.

For k = 1,…, p do

Solve $Ly = Pb_k$

Solve $Ux_k = y$

End

Step 3.

Form $X = (x_1, \ldots, x_p)$.
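In SciPy terms, this scheme amounts to one lu_factor call and repeated (or batched) lu_solve calls; a sketch with hypothetical data:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # hypothetical nonsingular coefficient matrix
B = rng.standard_normal((4, 3))   # three right-hand sides b_1, b_2, b_3

lu, piv = lu_factor(A)            # factor A just once: PA = LU
X = lu_solve((lu, piv), B)        # two triangular solves per right-hand side

print(np.allclose(A @ X, B))      # True
```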

3.5.5 Finding the Inverse of A

The inverse of an n × n nonsingular matrix A can be obtained as a special case of the above method. Just set $B = I_{n \times n}$. Then $X = A^{-1}$.

3.5.6 Computing the Determinant of A

The determinant of matrix A can be immediately computed, once the LU factorization of A is available. Thus, if Gaussian elimination with partial pivoting is used, giving PA = LU, then $\det(A) = (-1)^r \prod_{i=1}^{n} u_{ii}$, where r is the number of row interchanges in the partial pivoting process.
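A sketch of this formula using SciPy's lu_factor, whose piv array records, for each step i, the row that was interchanged with row i (so r is the number of entries with piv[i] ≠ i):

```python
import numpy as np
from scipy.linalg import lu_factor

A = np.array([[1.0, 2.0, 4.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

lu, piv = lu_factor(A)
r = np.sum(piv != np.arange(len(piv)))      # number of row interchanges
det = (-1.0) ** r * np.prod(np.diag(lu))    # det(A) = (-1)^r * u_11 * ... * u_nn
print(det, "vs", np.linalg.det(A))          # both about -3.0
```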

3.5.7 Iterative Refinement

Once the system Ax = b has been solved using Gaussian elimination, it is suggested that the computed solution be refined iteratively to a desired accuracy using the following procedure (a code sketch follows the steps below). The procedure is fairly inexpensive, requiring only $O(n^2)$ flops per iteration.

Let x be the computed solution of Ax = b obtained by using Gaussian elimination with partial pivoting factorization: PA = LU.

For k = 1, 2, …, do until the desired accuracy is reached:

1. Compute the residual $r = b - Ax$ (in double precision).

2. Solve $Ly = Pr$ for y.

3. Solve $Uz = y$ for z.

4. Update the solution $x \leftarrow x + z$.
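A minimal sketch of this refinement loop (hypothetical data; lu_solve performs both triangular solves of steps 2 and 3 at once, reusing the factorization):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))       # hypothetical system
b = rng.standard_normal(5)

lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)            # initial computed solution

for k in range(3):                    # a few refinement iterations
    r = b - A @ x                     # 1. residual (ideally in higher precision)
    z = lu_solve((lu, piv), r)        # 2.-3. solve A z = r via Ly = Pr, Uz = y
    x = x + z                         # 4. update
    print(k, np.linalg.norm(b - A @ x))
```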

URL: https://www.sciencedirect.com/science/article/pii/B9780122035906500070

A Functional-Differential System of Neutral Type Arising in a Two-Body Problem of Classical Electrodynamics

RODNEY D. DRIVER , in International Symposium on Nonlinear Differential Equations and Nonlinear Mechanics, 1963

2.2 Theorem

(Extended Existence and Uniqueness Theorem for the Two-Body Problem Without Radiation Reaction). Let each $r_i(t)$ ($i = 1, 2$) satisfy the conditions imposed on it in problem 2 for $\alpha \le t \le t_0$, and let each $v_i(t)$ be Lipschitz continuous on each of the open intervals $(t_k, t_{k+1})$ ($k = -p, \ldots, -1$). Also let

(1) $r_1(t_0) \ne r_2(t_0)$,

(2) $|v_i(t)| < c$ for $\alpha \le t \le t_0$, $i = 1, 2$, and

(3) equations (2.2) have a solution $\tau_{ji}(t_0)$ at $t_0$ for $(j, i) = (2, 1)$ and $(1, 2)$.

Then problem 2 has a unique solution for $\alpha \le t < \beta$, where either $\beta = +\infty$ or else

$$\lim_{t \to \beta - 0} r_1(t) = \lim_{t \to \beta - 0} r_2(t),$$

that is, a collision.

Method of Proof

One defines an appropriate problem 1, based on the system composed of (2.3) and (2.4), and shows that this problem 1 has a unique solution. One then shows an equivalence between the problem 1 and problem 2 under consideration, which implies the existence of a unique solution for problem 2. To show that the solution does indeed exist until a collision occurs, one must prove, among other things, that no delays $\tau_{ji}(t)$ approach zero and that no speeds approach the speed of light in a finite time unless there is a collision. This latter calculation depends upon the specific form of (2.3) and hence requires a more detailed statement of the equations than is presented in this paper.

URL: https://www.sciencedirect.com/science/article/pii/B9780123956514500519

Topics on Hydrodynamics and Volume Preserving Maps

Yann Brenier , in Handbook of Mathematical Fluid Dynamics, 2003

1.6 Nonexistence of solutions for the SPP

A local existence and uniqueness theorem for the SPP can be found in the Ebin and Marsden paper [20]: if h and I are sufficiently close in a sufficiently high order Sobolev norm, then there is a unique shortest path. In the large, uniqueness can fail for the SPP. For example, in the case when D is the unit disk and $g_0(x) = x = -g_1(x)$, the SPP has two solutions, $g(t, x) = x e^{i\pi t}$ and $g(t, x) = x e^{-i\pi t}$, where complex notation is used.

In 1985, A. Shnirelman [40] found, in the case $D = [0, 1]^3$, a class of data, which we will call "Shnirelman's class", for which the global SPP cannot have a (classical) solution. These data h are of the form

$$h(x_1, x_2, x_3) = (H(x_1, x_2), x_3),$$

where H is an area-preserving mapping of the unit square, i.e., an element of $G([0, 1]^2)$, such that

$$\delta_{[0,1]^3}(I, h) < \delta_{[0,1]^2}(I, H) < +\infty$$

(which means that the Action can be reduced if the third dimension motion is used). Indeed, let us consider a smooth curve g connecting I and h on G([0, 1]3), generated by some smooth time-dependent divergence-free vector field u(t, x), parallel to the boundary of D. Then, Shnirelman shows that there is such a curve g ˜ satisfying

$$A_{[0,1]^3}(\tilde{g}) < A_{[0,1]^3}(g).$$

The new trajectory $\tilde{g}$ can be roughly obtained in two steps. First, u is rescaled by squeezing its vertical component (with symmetry with respect to $x_3 = 1/2$):

$$\tilde{u}_i(t, x) = u_i(t, x_1, x_2, 2x_3), \quad i = 1, 2, \qquad \tilde{u}_3(t, x) = \frac{1}{2} u_3(t, x_1, x_2, 2x_3)$$

for $0 < x_3 < 1/2$, and

$$\tilde{u}_i(t, x) = u_i(t, x_1, x_2, 2 - 2x_3), \quad i = 1, 2, \qquad \tilde{u}_3(t, x) = -\frac{1}{2} u_3(t, x_1, x_2, 2 - 2x_3)$$

for $1/2 < x_3 < 1$. Next, the new field $\tilde{u}$, which is divergence-free and parallel to the boundary but only Lipschitz continuous, is mollified and generates $\tilde{g}$. Of course, the vertical rescaling can be repeated ad infinitum in order to reduce the Action. This will generate infinitesimally small scales in the vertical direction.

So we can already guess that a good concept of generalized solutions to the SPP, for such data, must be related to the limit of the Euler equations under vertical rescaling, namely, the so-called hydrostatic limit of the Euler equations discussed in [29, Chapter 4.6].

URL: https://www.sciencedirect.com/science/article/pii/S1874579203800046

Method of Continued Boundary Conditions

Alexander G. Kyurkchan , Nadezhda I. Smirnova , in Mathematical Modeling in Diffraction Theory, 2016

4.1.3 Existence and Uniqueness of the CBCM Integral Equation Solution

For Equations (4.6)–(4.8), the existence and uniqueness theorems can be proved as in Chapter 2. For example, for Equation (4.6), these theorems are formulated as follows:

1. Assume that a simple closed surface S satisfies the condition that k is not an eigenvalue of the internal homogeneous Dirichlet problem for the domain inside S. Then Equation (4.6) is solvable if and only if S surrounds all singularities of the solution $U_{\delta 1}(r)$ of the boundary-value problem (4.1), (4.3), where $\alpha = 1$, $\beta = 0$.

2. If the conditions of Theorem 1 are satisfied, Equation (4.6) has a unique solution.

The proof of these theorems repeats almost word for word the proof given in Chapter 2.

URL: https://www.sciencedirect.com/science/article/pii/B978012803728700004X

Lagrange Interpolation and Neville's Algorithm

Ron Goldman , in Pyramid Algorithms, 2003

2.16 Summary

In this chapter you have encountered most of the central ideas of this discipline: existence and uniqueness theorems; dynamic programming procedures; pyramid algorithms; up and down recurrences; basis functions; blends of overlapping data; rational schemes; tensor product, triangular, lofted, and Boolean sum surfaces; along with the use of barycentric coordinates to represent points in the domain of triangular surface patches. These themes will recur in various guises throughout this book. If you have understood everything in this chapter, the rest will be easy!

One core tenet of approximation theory and numerical analysis is that not all polynomial bases are equal. To solve problems in interpolation and approximation, we must use the basis most appropriate to the problem at hand. In this chapter we have seen that the Lagrange basis, and not the standard monomial basis, is most suited both for point interpolation and for polynomial multiplication. We continue with this theme in the next chapter, where we shall study Hermite interpolation (interpolation of both point and derivative data) by invoking special Hermite basis functions.
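As a compact reminder of the chapter's central algorithm, here is a minimal sketch of Neville's algorithm; the three data points are a hypothetical example:

```python
def neville(ts, ys, t):
    """Evaluate at t the polynomial interpolating the points (ts[i], ys[i])."""
    p = list(ys)                      # p[i] starts as the degree-0 interpolant y_i
    n = len(ts)
    for level in range(1, n):         # each level blends overlapping interpolants
        for i in range(n - level):
            p[i] = ((ts[i + level] - t) * p[i] + (t - ts[i]) * p[i + 1]) \
                   / (ts[i + level] - ts[i])
    return p[0]

# Hypothetical data: three points on the parabola y = t^2
print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))   # 2.25
```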

URL: https://www.sciencedirect.com/science/article/pii/B9781558603547500039

ASPECTS OF DETERMINISM IN MODERN PHYSICS

John Earman , in Philosophy of Physics, 2007

3.5 Continuity issues

Consider a single particle of mass m moving on the real line in a potential $V(x)$, $x \in \mathbb{R}$. The standard existence and uniqueness theorems for the initial value problem of ODEs can be used to show that the Newtonian equation of motion

(11) $m\ddot{x} = F(x) := -\frac{dV}{dx}$

has a locally (in time) unique solution if the force function $F(x)$ satisfies a Lipschitz condition.³¹ An example of a potential that violates the Lipschitz condition at the origin is $V(x) = -\frac{9}{2}|x|^{4/3}$. For the initial data $x(0) = 0 = \dot{x}(0)$ there are multiple solutions of (11): $x(t) \equiv 0$, $x(t) = t^3$, and $x(t) = -t^3$, where m has been set to unity for convenience. In addition, there are also solutions $x(t)$ with $x(t) = 0$ for $t < k$ and $x(t) = \pm(t - k)^3$ for $t \ge k$, where k is any positive constant. That such force functions do not turn up in realistic physical situations is an indication that Nature has some respect for determinism. In QM it turns out that Nature can respect determinism while accommodating some of the non-Lipschitz potentials that would wreck Newtonian determinism (see Section 5.2).
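As a quick supplementary check (not in the original) that $x(t) = t^3$ solves (11) for this potential with $m = 1$:

$$F(x) = -\frac{dV}{dx} = 6|x|^{1/3}\operatorname{sgn}(x), \qquad \ddot{x}(t) = 6t = 6\,|t^3|^{1/3}\operatorname{sgn}(t^3) = F\big(x(t)\big).$$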

URL: https://www.sciencedirect.com/science/article/pii/B9780444515605500178

Hermite Interpolation and the Extended Neville Algorithm

Ron Goldman , in Pyramid Algorithms, 2003

3.6 Summary

In this chapter we have extended the ideas and techniques from Chapter 2 on Lagrange interpolation of control points to Hermite interpolation of control points and derivatives. Most of the results on Lagrange interpolation, including existence and uniqueness theorems, Neville's algorithm, dynamic programming procedures, up and down recurrences, basis functions, rational schemes, and tensor product, lofted, and Boolean sum surfaces, extend readily to the Hermite setting. If you understood Chapter 2 well, this chapter will have been mostly a review with some modest extensions.

We mentioned at the end of Chapter 2 that to solve problems in interpolation and approximation, we must use the basis most appropriate to the problem at hand. While the Lagrange and Hermite bases are improvements over the standard monomial basis for performing Lagrange and Hermite interpolation, they are not as efficient computationally as the monomial scheme. In the next chapter we introduce the Newton basis, a basis that is quite suitable for performing interpolation and as efficient computationally as the monomial basis.

URL: https://www.sciencedirect.com/science/article/pii/B9781558603547500040