Differential Equations, Part II: Second Order Linear Equations

July 24th, 2017


We move on to second order linear differential equations. First, we need a theorem that tells us what kinds of solutions exist for such equations.

Existence and Uniqueness

We need a suitable theorem guaranteeing existence and uniqueness of solutions. The theorem goes just as we might expect:

Theorem

Let $p,q,r$ be continuous in an interval $I$. Then the differential equation:

$$ y'' + py' + qy = r $$

With initial conditions $y(t_0) = y_0$ and $y'(t_0) = v_0$, has a unique solution in the interval.
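As a quick sanity check, here is a minimal sketch using SymPy (assuming SymPy is available; the example equation and initial conditions are our own illustrative choices), asking for the unique solution of a particular initial value problem:

```python
# Solve y'' + y' - 2y = 0 with y(0) = 1, y'(0) = 0.
# The theorem guarantees exactly one solution; dsolve finds it.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t, 2) + y(t).diff(t) - 2*y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 0})
print(sol)  # y(t) = 2*exp(t)/3 + exp(-2*t)/3
```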

We start by looking at the simplest type of second order linear differential equation. To keep things simple, we set the coefficients to be constants and the right hand side to be zero.

Homogeneous Equations with Constant Coefficients

We have a differential equation of the form:

$$ ay'' + by' + cy = 0 $$

Such an equation where the right hand side is zero is called a homogeneous equation.

We can then use a trial solution $y= e^{rt}$, which yields the associated auxiliary equation (or characteristic equation):

$$ ar^2 + br + c = 0 $$

Since $e^{rt} \ne 0$, the differential equation holds exactly when $r$ satisfies this auxiliary equation. Solving the quadratic yields up to two distinct roots, and hence up to two solutions of this form.
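For instance, for the equation $y'' - 5y' + 6y = 0$, the auxiliary equation factors as

$$ r^2 - 5r + 6 = (r-2)(r-3) = 0 $$

giving roots $r = 2, 3$ and the two solutions $e^{2t}$ and $e^{3t}$.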

As an alternative way of writing this, we can denote the derivative operator by $D$, with $Df = f', D^2f = f''$, etcetera. Then the differential equation becomes:

$$ (aD^2 + bD + c)y =0 $$

And to get a nontrivial solution $y\ne 0$, we solve the same auxiliary equation for $D$.

First note that for a homogeneous linear equation, linear combinations of solutions yield solutions; this creates a vector space structure for the solutions.

This is called the superposition principle: for any two solutions to a homogeneous linear equation, any linear combination of the solutions is also a solution.

In general, for a linear constant coefficient differential equation of order $n$, the space of solutions has dimension $n$. So we only need to construct $n$ linearly independent solutions, which we do by factoring the characteristic polynomial:

- For each non-repeated real root $r$, we take the solution $e^{rt}$.
- For each real root $r$ repeated $m$ times, we take the solutions $e^{rt}, te^{rt}, \dots, t^{m-1}e^{rt}$.
- For each pair of complex roots $\alpha \pm i\beta$, we take the solutions $e^{\alpha t}\cos(\beta t)$ and $e^{\alpha t}\sin(\beta t)$ (with repeated pairs picking up factors of $t$ as above).
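As an illustration (a minimal SymPy sketch; the cubic example is our own), solving an order 3 equation whose characteristic polynomial is $(r-1)^3$ recovers the repeated-root basis $e^t, te^t, t^2e^t$:

```python
# Solve y''' - 3y'' + 3y' - y = 0, i.e. (D - 1)^3 y = 0.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t, 3) - 3*y(t).diff(t, 2) + 3*y(t).diff(t) - y(t), 0)
print(sp.dsolve(ode, y(t)))
# Expect something like: y(t) = (C1 + C2*t + C3*t**2)*exp(t)
```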

Linear Independence

As mentioned earlier, the solutions to a linear homogeneous differential equation of order $n$ on an interval $I$ form a vector space of dimension $n$. So what we’re looking for is a basis of solutions. The above construction gives us $n$ solutions, so we need to check that they are linearly independent; if they are, we have indeed found a basis.

Definition: Let $f_1, f_2, … f_n$ be a set of smooth functions on an interval $I$. We look at the following equation:

$$ c_1f_1 + c_2f_2 + \dots + c_nf_n = 0 $$

Where $c_i$ are constants. Note that by $0$ here we are denoting the zero function; in other words, this equation should hold for all inputs $x$.

If we can find a solution where at least one $c_i$ is nonzero, we call the functions linearly dependent on the interval; if instead the only solution has each $c_i =0$, the functions are called linearly independent.
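For example, on any interval the functions $f_1 = 1$, $f_2 = t$, $f_3 = 2 + 3t$ are linearly dependent, since $2f_1 + 3f_2 - f_3 = 0$; on the other hand, $1, t, t^2$ are linearly independent, since a nonzero polynomial of degree at most two has at most two roots and so cannot vanish identically.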


If we can find such a solution $c_1, \dots, c_n$, it is clear that the above equation still holds when we take the derivative of both sides (and thus the second derivative, third derivative, etc). Thus, if the set $\{f_i\}$ is linearly dependent, then we have a non-trivial solution to the following system of equations.

$$ \begin{bmatrix} f_1 & \dots & f_n \\ f_1' & \dots & f_n' \\ \vdots & \ddots & \vdots \\ f_1^{(n-1)} & \dots & f_n^{(n-1)} \end{bmatrix} \begin{bmatrix} c_1 \\ \vdots\\ c_n \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots\\ 0 \end{bmatrix} $$

For this system of equations to have a nontrivial solution, we need:

$$ \det \begin{bmatrix} f_1 & \dots & f_n \\ f_1' & \dots & f_n' \\ \vdots & \ddots & \vdots \\ f_1^{(n-1)} & \dots & f_n^{(n-1)} \end{bmatrix} = 0 $$

Definition: For functions $f_1,…f_n$, the above determinant is called the Wronskian of $f_1,…f_n$, denoted $W(f_1,…f_n)$. The above matrix is called a fundamental matrix.

Now, we can leverage our knowledge of linear algebra to state the following theorem.

Theorem

Let $f_1,\dots,f_n$ be solutions to an order $n$ linear homogeneous differential equation in an interval. Then, $f_1,\dots,f_n$ are linearly independent if and only if the Wronskian $W(f_1,\dots,f_n)$ is nonzero somewhere in the interval.
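As a quick check (a minimal SymPy sketch), the Wronskian of the pair $e^t, te^t$ from the repeated root construction is nowhere zero, confirming linear independence:

```python
# W(e^t, t*e^t) for the solutions of y'' - 2y' + y = 0.
import sympy as sp

t = sp.symbols('t')
W = sp.wronskian([sp.exp(t), t*sp.exp(t)], t)
print(sp.simplify(W))  # exp(2*t), which is never zero
```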

Reduction of Order

So how exactly did we come to the above construction of solutions for repeated roots in a linear constant coefficient homogeneous equation? The answer lies in the method known as reduction of order.

Suppose we have a linear differential equation of the form:

$$ y'' + p(t)y' + q(t)y = r(t) $$

With $p,q,r$ continuous and $r$ possibly zero. Suppose further that we know some solution $y_1$ (with continuous derivative) which is never zero in an interval. Then, we can try a solution of the form $y_2=vy_1$ for some twice differentiable function $v$. Substitution and some algebra yields:

$$ v'' + \left( \frac{2y_1'}{y_1}+p \right)v' = \frac{r}{y_1} $$

Since this is a linear first order differential equation in $v'$, and $y_1', p, r$ are assumed to be continuous with $y_1$ nonzero, we know there is a unique solution in our interval.

Note that though we are essentially dealing with a first order differential equation for $v'$, when we integrate $v'$ we obtain an integration constant. We can WLOG write $v = \tilde{v}+c$, and thus write:

$$y_2 = \tilde{v}y_1 + cy_1$$

As we expected, we end up with two integration constants: the first comes from solving the above differential equation for $v'$; the second comes from integrating $v'$. With two initial conditions, then, we can solve for these two constants uniquely.

Thus, we can use reduction of order to solve a general (homogeneous or non-homogeneous) second order linear differential equation where we know one solution.
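Here is a minimal SymPy sketch of the method for the equation $t^2y'' - 3ty' + 4y = 0$ on $t > 0$ (an illustrative example of our own), with known solution $y_1 = t^2$; in standard form, $p = -3/t$ and $r = 0$:

```python
# Reduction of order: given y1 = t^2, find a second solution y2 = v*y1.
# The equation for v is v'' + (2*y1'/y1 + p) v' = 0.
import sympy as sp

t = sp.symbols('t', positive=True)
y1 = t**2
p = -3/t

# v' satisfies a first order linear equation; solve by integrating factor.
vprime = sp.exp(-sp.integrate(2*sp.diff(y1, t)/y1 + p, t))
v = sp.integrate(vprime, t)
print(sp.simplify(v*y1))  # t**2*log(t)
```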

Example

Take a constant coefficient differential equation with repeated roots, such as $(D-r)^2y = (D^2 - 2Dr + r^2)y = 0$. We already have a solution of the form $e^{rt}$, so we set:

$$ y = ve^{rt} $$

We know that $y_1 = e^{rt}$ and $y_1' = re^{rt}$. Our equation for $v$ becomes:

$$\begin{aligned} v'' + \left( \frac{2y_1'}{y_1}+ p \right)v' &= 0 \\ v'' + \left( 2r - 2r \right)v' &= 0 \\ v'' &= 0 \end{aligned}$$

Where $p = -2r$ here, and the right hand side is zero since our equation is homogeneous.

Thus, we have $v = at +b$, for some real coefficients $a,b$. Our linearly independent solution to the differential equation is then:

$$\begin{aligned} y = vy_1 = ate^{rt} + be^{rt} \end{aligned}$$

And for simplicity’s sake, we can set $a=1, b=0$ and still have the linearly independent pair $\{e^{rt}, te^{rt}\}$.

General Solutions to Non-Homogeneous Linear Differential Equations

Suppose we are working with a second order linear equation:

$$ y'' + p(t)y' + q(t)y = r(t) $$

And suppose we have two solutions to the above equation, $y_1$ and $y_2$. Then, we know the following:

$$\begin{aligned} y_1'' + p(t)y_1' + q(t)y_1 &= r(t) \\ y_2'' + p(t)y_2' + q(t)y_2 &= r(t) \\ (y_1-y_2)'' + p(t)(y_1-y_2)' +q(t)(y_1-y_2) &= 0 \end{aligned} $$

So, $y_1-y_2$ is a solution to the associated homogeneous equation! This is useful for the following theorem that allows us to solve non-homogeneous linear differential equations.

Theorem

Suppose we have a linear, non-homogeneous differential equation. Then the general solution can be written:

$$\begin{aligned} y(t) = y_c + y_p \end{aligned} $$

Where $y_c$ is a solution of the associated homogeneous equation, called the complementary solution, and $y_p$ is one particular solution of the non-homogeneous equation, called the particular solution.

From now on, we will call a linearly independent set of solutions $y_1,\dots,y_n$ to a homogeneous equation a set of fundamental solutions (which, accordingly, fill out the first row in a fundamental matrix). We can then rewrite the above solution:

$$\begin{aligned} y(t) = \sum\limits_{i=1}^n c_iy_i + y_p \end{aligned} $$

Where the coefficients $c_i$ can be solved for if we are given $n$ initial conditions.
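For instance (a minimal SymPy sketch, with an equation of our own choosing), for $y'' + y = t$ with $y(0) = 0$ and $y'(0) = 2$, we have $y_c = c_1\cos t + c_2\sin t$ and $y_p = t$, and the two initial conditions pin down $c_1, c_2$:

```python
# Solve y'' + y = t with y(0) = 0, y'(0) = 2.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t, 2) + y(t), t)
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 2})
print(sol)  # y(t) = t + sin(t)
```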


This theorem gives us a method of solving a general non-homogeneous linear differential equation:

The Method of Undetermined Coefficients

We first covered the case of a linear constant coefficient homogeneous differential equation. From the previous section, we know that a general solution to a non-homogeneous linear equation can be written as $y=y_c+y_p$. Let’s take a look at a special case where we can solve for such a solution easily.

Using the earlier technique of writing first derivatives as $D$, second derivatives as $D^2$, etcetera, we can write the left hand side as a linear differential operator: a polynomial in $D$. For example:

$$\begin{aligned} y'' + y &= 0 \\ (D^2 + 1)y &= 0 \\ P(D)y &= 0 \end{aligned} $$

Where $P(D) = D^2 + 1$ is a polynomial in $D$.

Conditions

Suppose we are working with a non-homogeneous differential equation of the form:

$$\begin{aligned} P(D)y = F(t) \end{aligned} $$

And further suppose that $F(t)$ is a solution of some linear homogeneous constant coefficient equation $A(D)y = 0$. We call $A(D)$ an annihilator of $F(t)$.

Because we know what these kinds of solutions look like, this means that $F(t)$ must be a linear combination of terms of the form $t^ke^{at}$, $t^ke^{at}\cos(bt)$, and $t^ke^{at}\sin(bt)$: polynomials, exponentials, sines and cosines, and products of these.
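For example, $D - 3$ annihilates $e^{3t}$, $D^2$ annihilates any linear polynomial $a + bt$, $(D-a)^{k+1}$ annihilates $t^ke^{at}$, and $D^2 + b^2$ annihilates both $\cos(bt)$ and $\sin(bt)$.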

In this case, we can “multiply” both sides by $A(D)$ to get a higher order homogeneous equation:

$$\begin{aligned} A(D)P(D)y = A(D)F(t) = 0 \end{aligned} $$

And we know exactly how to solve this kind of equation. Note that if $y$ is a solution to $P(D)y = 0$, then $y$ is a solution to $A(D)P(D)y = 0$. This fact will become important in just a second.

Finding a Solution

From our theorem, we know that the solutions to $P(D)y=F(t)$ are of the form $y=y_c +y_p$, where $P(D)y_c = 0$. So the first thing we do to solve our equation is find the complementary solution $y_c$ using the methods above.

Next, we solve $A(D)P(D)y = 0$ to obtain a general solution $c_1y_1 + \dots + c_ky_k$. As we noted before, every solution of our target non-homogeneous differential equation is also a solution to the equation $A(D)P(D)y=0$, and is thus of the form $c_1y_1 + \dots + c_ky_k$. Furthermore, from computing the complementary solutions we know that $y_c = c_1y_1 + \dots + c_ny_n$ for some $n < k$. Thus, we know that $y_p = y - y_c$ is of the form $c_{n+1}y_{n+1} + \dots + c_ky_k$.

Finally, substituting into the equation $P(D)y_p = F(t)$, we can solve for these coefficients; hence the name, the method of undetermined coefficients. Once we have $y_p$, we can write our general solution $y=y_c+y_p$.
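As a worked example (of our own choosing), consider $y'' - 3y' + 2y = e^{3t}$, so that $P(D) = (D-1)(D-2)$ and $F(t) = e^{3t}$ with annihilator $A(D) = D - 3$:

$$\begin{aligned} (D-3)(D-1)(D-2)y &= 0 \\ y &= c_1e^t + c_2e^{2t} + c_3e^{3t} \end{aligned}$$

The first two terms form $y_c$, so $y_p = c_3e^{3t}$. Substituting into $P(D)y_p = F(t)$:

$$ (9 - 9 + 2)c_3e^{3t} = e^{3t} \implies c_3 = \tfrac{1}{2} $$

Giving the general solution $y = c_1e^t + c_2e^{2t} + \frac{1}{2}e^{3t}$.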

To summarize the method:

1. Find the complementary solution $y_c$ by solving $P(D)y = 0$.
2. Find an annihilator $A(D)$ with $A(D)F = 0$.
3. Solve $A(D)P(D)y = 0$, and discard the terms already appearing in $y_c$; what remains is the form of $y_p$, with undetermined coefficients.
4. Substitute $y_p$ into $P(D)y_p = F(t)$ and solve for the coefficients.
5. Write the general solution $y = y_c + y_p$.

Summary

With these tools, we can do the following for linear differential equations:

- Solve any homogeneous constant coefficient equation by factoring its characteristic polynomial.
- Given one nonvanishing solution of a second order linear equation, find a second solution (and handle the non-homogeneous case) by reduction of order.
- Solve a non-homogeneous constant coefficient equation $P(D)y = F(t)$ with the method of undetermined coefficients, whenever $F$ has an annihilator.

In the next section, we tackle the problem of finding solutions to arbitrary non-homogeneous equations of any order.
