4 Second order Linear Differential Equations
4.1 The general theory of second order Linear Differential Equations
The first part of this course deals with second order linear differential equations with constant coefficients. However, before we zero in on solution techniques for these equations, it is worthwhile to take a brief look at the more general theory.
Many physical problems can be cast as second order differential equations of the form \(\ddot{x} = f(t,x,\dot{x})\) with \(\ddot{x}\) interpreted as acceleration, \(x\) as position, and \(\dot{x}\) as velocity. These often arise as initial value problems, when combined with initial conditions of the form \(x(0) = x_0\), \(\dot{x}(0) = y_0\).
Other problems appear as boundary value problems, with the conditions prescribed at two different values of the independent variable, for example \(x(0) = a\), \(x(1) = b\).
4.1.1 Differential Equations and Linear Algebra
A second order linear differential equation has the form \[P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)y = G(x), \tag{4.1}\] where the functions \(P\), \(Q\), \(R\) and \(G\) are specified. To explain why we call these equations linear, we need to review a few concepts from linear algebra. In particular, we need to learn to view functions as vectors.
One key concept from linear algebra is a linear combination. Suppose \(y_1, y_2, \dots, y_n\) are \(n\) functions defined on some interval \([a,b]\). For example, the set \(1, x, x^2, x^3, x^4, x^5\) with \(0 \le x \le 1\) is a set of 6 functions defined on the interval \([0,1]\). Using the jargon we learned in our first linear algebra course, we can construct any polynomial of degree 5 or lower by taking linear combinations of the 6 basis functions. Specifically, if \(c_0, c_1, \dots, c_5\) are any 6 numbers, then \(y(x) = c_0 + c_1x + \dots + c_5x^5\) is a polynomial. If \(c_5\ne0\), \(y\) is a polynomial of degree 5, otherwise, it has a lower degree. We can do the same constructions for any set of functions. Given a set of functions \(\{y_1, y_2, \dots, y_n\}\) all with the same domain, \([a,b]\), and a set of scalars \(\{c_1, \dots, c_n\}\), the linear combination \(y = c_1y_1 + \dots + c_ny_n\) is a function with the same domain.
From our calculus course, we know that if \(y_1, \dots, y_n\) are all continuous on \([a,b]\), then the linear combination is also continuous. In short, functions are vectors, and all the jargon we learned in linear algebra can be applied to functions.
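As a small aside (a sympy sketch of ours, not part of the notes), the polynomial construction above is easy to experiment with; the coefficient values below are arbitrary illustrative choices:

```python
import sympy as sp

x = sp.symbols('x')

# The six basis functions 1, x, x^2, ..., x^5 on [0, 1]
basis = [x**k for k in range(6)]

# An arbitrary choice of the scalars c_0, ..., c_5 (illustrative values)
coeffs = [2, -1, 0, 3, 0, 5]

# The linear combination y = c_0*1 + c_1*x + ... + c_5*x^5
y = sum(c * f for c, f in zip(coeffs, basis))

print(sp.expand(y))       # 5*x**5 + 3*x**3 - x + 2
print(sp.degree(y, x))    # 5, since c_5 != 0
```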
Definition 4.1 The set of functions \(\{y_1, y_2, \dots, y_n\}\) is said to be linearly independent if the only scalars \(c_1,\dots,c_n\) for which \(c_1y_1 + \dots + c_ny_n = 0\) for all \(x\) in \([a,b]\) are \(c_1=c_2=\dots=c_n=0\).
Example 4.1 To show the two functions \(\sin x\) and \(\cos x\) defined for \(0 \le x \le \pi\) are linearly independent, we need to show that \(c_1\sin x + c_2\cos x = 0\) for all \(x\in[0,\pi]\) only if \(c_1=c_2=0\). Since \(\sin 0 = 0\) and \(\cos 0 = 1\), setting \(x=0\) implies \(c_2=0\). Similarly, setting \(x=\pi/2\) implies \(c_1=0\). Hence, the two functions are linearly independent.
So why do we call Equation 4.1 a linear differential equation? It is useful to introduce a shorthand notation for the left hand side of Equation 4.1. In more advanced courses this would be introduced as a linear operator and some fancy theorems would be invoked. An operator is simply a mathematical object that maps functions to other functions in the same way the functions of first year calculus map numbers to other numbers. Suppose \(y\) is twice differentiable, then we can define a new function, \(Ly\), by \[\begin{equation} Ly = P\frac{d^2y}{dx^2} + Q\frac{dy}{dx} + Ry. \end{equation}\] We use this to define the operator \(L\). Simply put, given a known, twice differentiable function \(y\), the operation \(L\) returns a new function which we refer to simply as \(Ly\). The linearity of the operator refers to the fact that \(L\) is a linear transformation. That is, for any two twice-differentiable functions \(u\) and \(v\) and any two real numbers (scalars) \(a\) and \(b\), \(L(au+bv) = aLu + bLv\).
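To make the operator idea concrete, here is a brief sympy sketch; the particular choices of \(P\), \(Q\), \(R\), \(u\), and \(v\) are illustrative, not from the notes. It applies \(L\) to a linear combination and confirms that \(L(au+bv) - (aLu + bLv)\) simplifies to zero:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

# Illustrative coefficient functions (any choices would do)
P, Q, R = x**2 + 1, sp.sin(x), 3

def L(y):
    """Apply the operator Ly = P y'' + Q y' + R y."""
    return P * sp.diff(y, x, 2) + Q * sp.diff(y, x) + R * y

u = sp.exp(x)
v = sp.cos(x)

# Linearity: L(a*u + b*v) - (a*L(u) + b*L(v)) simplifies to 0
print(sp.simplify(L(a*u + b*v) - (a*L(u) + b*L(v))))   # 0
```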
4.1.2 The solution set
A solution to the second order differential equation, Equation 4.1, is any twice differentiable function \(y(x)\) that satisfies Equation 4.1. Since we are looking for \(y\) as a function of \(x\), it makes sense to refer to \(x\) as our independent variable and \(y\) as our dependent variable. The linearity of the equation makes some more jargon from linear algebra useful. In particular,
- \(P\), \(Q\) and \(R\) are the coefficients of the equation,
- the equation is homogeneous if \(G=0\),
- the equation is nonhomogeneous if \(G\ne 0\),
- the equation is constant coefficient if \(P\), \(Q\) and \(R\) are constants.
The right hand side, \(G\), is typically an applied force or current or something similar. Hence we refer to \(G\) as the forcing function.
There are two theorems that are relevant here.
Theorem 4.1 If \(y_1\) and \(y_2\) are solutions of the homogeneous problem then for any constants \(c_1\) and \(c_2\), the linear combination \(y = c_1y_1 + c_2y_2\) is also a solution of the homogeneous problem.
In the language of linear algebra, any linear combination of two solutions is also a solution. The proof follows almost immediately from substituting the linear combination of the two solutions into the differential equation.
Theorem 4.2 If \(y_1\) and \(y_2\) are two linearly independent solutions to the homogeneous problem and \(y_p\) is any solution to the nonhomogeneous problem, then \(y = c_1y_1 + c_2y_2 + y_p\) is also a solution to the nonhomogeneous problem for any constants \(c_1\) and \(c_2\).
Again, the proof is simply a matter of plugging the proposed solution into the differential equation.
Theorem 4.3 If \(y_1\) and \(y_2\) are two linearly independent solutions of the homogeneous problem and \(y_p\) is any solution to the nonhomogeneous problem, then every solution of the nonhomogeneous problem can be written in the form \(y = c_1y_1 + c_2y_2 + y_p\) for some real numbers \(c_1\) and \(c_2\).
In short, this theorem states that if we can find two linearly independent solutions, then we know all the solutions to the homogeneous problem, and if we also know any one solution to the nonhomogeneous problem, we know them all. The form of the solution set should look very familiar. It has the same structure as the set of solutions to a system of linear algebraic equations.
The proof of this theorem is beyond the scope of this course. However, we will be able to give a sketch of the proof later in the term.
Many of the theorems from linear algebra apply to the operator \(L\). We refer to \(Ly=0\) as the homogeneous, or complementary, problem, and \(Ly = G\) as the nonhomogeneous problem. As with linear systems, if \(y_p\) is a solution to the nonhomogeneous problem and \(y_h\) is any solution to the homogeneous problem, then \(y = cy_h + y_p\) is also a solution to the nonhomogeneous problem, for any constant \(c\). This can be easily verified by direct substitution. Further, if \(y_1\) and \(y_2\) are both solutions to the homogeneous problem, then any function of the form \(y = c_1y_1 + c_2 y_2\) is also a solution to the homogeneous problem. Note the slight difference in the wording of these results from the theorems stated above. When \(L\) is a second order linear differential operator, such as arises from a second order linear differential equation, it can further be shown that the homogeneous problem \(Ly=0\) has a two-dimensional solution set. That is, we can always find two linearly independent solutions, \(y_1\) and \(y_2\), and every solution to \(Ly=0\) can be written as a linear combination of these. This follows from a theorem on the existence and uniqueness of solutions to systems of first order differential equations, which will be sketched later in these notes.
Later in the notes we will also encounter the eigenvalue problem \(Ly=\lambda y\), where we are interested in finding values of the constant \(\lambda\) for which solutions to the eigenvalue problem exist. These typically arise in boundary value problems, and the eigenvalues correspond to modes of oscillation in the physical system.
Example 4.2 Consider the mass-spring system of Figure 4.1 with a mass of \(m\) attached to a spring with spring constant \(k\). A differential equation model for the system can be obtained either by energy balance or force balance. The energy of the system is \[\begin{align*} \text{Total Energy} &= \text{Kinetic Energy} + \text{Spring Potential} \\ &= \frac{1}{2}m\dot{x}^2 + \frac{1}{2}kx^2 \end{align*}\] Differentiating and assuming the total energy is conserved (constant) leads to the second order differential equation \(0 = m\dot{x}\ddot{x} + kx\dot{x}\) which simplifies to \(m\ddot{x} + kx = 0.\) Since both \(k\) and \(m\) are positive constants, the solutions to the equation are functions whose second derivatives are negative multiples of themselves: \(\ddot{x} = -\frac{k}{m}x\). These are sines and cosines of the appropriate frequency: \[x_1(t) = \cos\left(\sqrt{\tfrac{k}{m}}\, t\right), \qquad x_2(t) = \sin\left(\sqrt{\tfrac{k}{m}}\, t\right).\] The general solution to the differential equation is any linear combination of these two solutions: \(x(t) = c_1\cos\left(\sqrt{\tfrac{k}{m}}\, t\right) \, + \, c_2 \sin\left(\sqrt{\tfrac{k}{m}}\, t\right).\)
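A quick symbolic check (a sympy sketch, keeping \(m\) and \(k\) as positive symbols) confirms that both candidate functions satisfy \(m\ddot{x} + kx = 0\):

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

omega = sp.sqrt(k / m)                      # natural frequency
lhs = m * sp.Derivative(x(t), t, 2) + k * x(t)

# Both candidate solutions make the left hand side vanish
for candidate in (sp.cos(omega * t), sp.sin(omega * t)):
    print(sp.simplify(lhs.subs(x(t), candidate).doit()))   # 0 and 0

# dsolve recovers an equivalent general solution (sympy may present it differently)
print(sp.dsolve(sp.Eq(lhs, 0), x(t)))
```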
4.2 Linear Second order differential equations with constant coefficients
4.2.1 Theory and Techniques
Consider the problem
\[ay'' + by' +cy = 0 \tag{4.2}\]
Based on our experience with first order equations (\(a=0\)), we look for a solution of the form \(y=e^{rx}\). If we can find two such solutions, we’ve got them all! First, we substitute the ansatz, \(y(x) = e^{rx}\), into Equation 4.2 \[\begin{align*} a\frac{d^2}{dx^2}\left(e^{rx}\right) + b\frac{d}{dx}\left(e^{rx}\right) +ce^{rx} &= 0 \\ ar^2e^{rx} + bre^{rx} +ce^{rx} &= 0 \\ \left(ar^2 + br +c\right)e^{rx} &= 0 \end{align*}\] Factoring out \(e^{rx}\) we see that \(r\) must satisfy \[ar^2 + br +c = 0. \tag{4.3}\] That is, \[r = \frac{-b \pm \sqrt{b^2-4ac}}{2a}.\]
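The same substitution can be carried out symbolically; this short sketch divides out \(e^{rx}\) and recovers the characteristic polynomial and its roots:

```python
import sympy as sp

x, r, a, b, c = sp.symbols('x r a b c')

y = sp.exp(r * x)
lhs = a * sp.diff(y, x, 2) + b * sp.diff(y, x) + c * y

# Dividing by e^{rx} leaves the characteristic polynomial
char_poly = sp.simplify(lhs / y)
print(char_poly)                          # a*r**2 + b*r + c
print(sp.solve(sp.Eq(char_poly, 0), r))   # the two roots from the quadratic formula
```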
There are three cases to consider.
Case I: two real and distinct roots (\(b^2-4ac>0\))
The general solution has the form \[y = c_1e^{r_1x} + c_2e^{r_2x}\] where \(r_1\) and \(r_2\) are the two real roots.
Case II: one real root (\(b^2 - 4ac = 0\))
Here we have only one solution of exponential form, \(y_1 = e^{rx}\), with \(r=-b/(2a)\).
At some point in history, someone stumbled upon a second independent solution, \(y_2 = xe^{rx}\), with \(r=-b/(2a)\). While it is not at all obvious why this should be a solution or how someone would come up with it, it is easy to verify that it is a solution and that \(y_1\) and \(y_2\) are linearly independent.
We leave it as an exercise to show, by direct substitution, that \(y_2\) solves Equation 4.2. That is, show that \[a\frac{d^2}{dx^2}\left(xe^{rx}\right) + b\frac{d}{dx}\left(xe^{rx}\right) +cxe^{rx} = 0\] provided \(b^2 - 4ac =0\) and \(r=-b/(2a)\).
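If you want to check your work, the substitution can also be done symbolically. The sketch below imposes \(c = b^2/(4a)\) (so that \(b^2 - 4ac = 0\)) and \(r = -b/(2a)\), and confirms the left hand side vanishes:

```python
import sympy as sp

x = sp.symbols('x')
a, b = sp.symbols('a b', nonzero=True)

r = -b / (2 * a)        # the repeated root
c = b**2 / (4 * a)      # enforces b^2 - 4ac = 0

y2 = x * sp.exp(r * x)
lhs = a * sp.diff(y2, x, 2) + b * sp.diff(y2, x) + c * y2

print(sp.simplify(lhs))   # 0, so x*e^{rx} is indeed a solution
```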
To show the two solutions are linearly independent, consider the combination \[\alpha e^{rx} + \beta xe^{rx} = 0.\]
The key concept with linear independence of functions is that the equality must hold for all \(x\) in the domain of the functions. In this case, since \(e^{rx} > 0\), it follows that \(\alpha + \beta x = 0\). If \(\beta\ne 0\), this holds only at the single point \(x=-\alpha/\beta\), and if \(\beta = 0\) it forces \(\alpha = 0\). Hence the combination can only vanish on an interval of nonzero length when \(\alpha=\beta=0\), so the two functions are linearly independent on any such interval, which is all we care about.
In summary, the general solution in case II has the form \[y = (c_1+c_2x)e^{rx} \] with \(r = -b/(2a)\).
Case III: complex roots (\(b^2 - 4ac < 0\))
In this case, we have two complex roots, which we will express as \(r_1 = \alpha - \beta i\) and \(r_2 = \alpha + \beta i\), where \(\alpha = -b/(2a)\) and \(\beta = \sqrt{4ac-b^2}/(2a)\). For those who are comfortable with complex functions, you will be happy to know that the two functions \(e^{r_1 x}\) and \(e^{r_2 x}\) are indeed linearly independent.
The only catch is that we started with a problem involving real variables and real functions and it would be nice to have real solutions. Fortunately, it is possible to show that the real and imaginary parts of the complex solutions are also linearly independent solutions. Hence, the general solution has the form
\[y(x) = \left( c_1 \cos \beta x + c_2 \sin \beta x \right) e^{\alpha x}\]
4.2.2 Exercises
All of the exercises in Section 7.1 of OpenStax Calculus Volume 3 are good practice.
Exercise 4.1 Use Euler’s formula, \(\exp(i\theta) = \cos \theta + i\sin \theta\), to show that \(\cos \theta = \frac{1}{2}\left(e^{i\theta}+e^{-i\theta}\right)\) and \(\sin \theta = \frac{1}{2i}\left(\exp(i\theta)-\exp(-i\theta)\right)\).
Exercise 4.2 Use the results of Exercise 4.1 and Theorem 4.1 to show that the general solution of Equation 4.2 can be expressed as \(y(t) = c_1e^{\alpha t} \cos \beta t + c_2e^{\alpha t} \sin \beta t\) if the roots of Equation 4.3 are complex.
Exercise 4.3 Find the general solutions of \(y'' - 4 y' + 13 y = 0\).
- Characteristic Equation: \(r^2 - 4r + 13 = 0\)
- Roots: \(r = 2 \pm 3i\) (complex case)
- General solution, \(y(t) = c_1e^{2t}\cos(3t) + c_2e^{2t}\sin(3t)\)
Exercise 4.4 Find the general solutions of \(y'' - 6 y' + 5 y = 0\).
- Characteristic Equation: \(r^2 - 6r + 5 = 0\)
- Roots: \(r \in \{5,1\}\) (two distinct real roots)
- General solution, \(y(t) = c_1e^{5t} + c_2e^{t}\)
Exercise 4.5 Find the general solutions of \(y'' + 4 y' + 4 y = 0\).
- Characteristic Equation: \(r^2 + 4r + 4 = 0\)
- Roots: \(r = -2\) (double root)
- General solution, \(y(t) = c_1e^{-2t} + c_2te^{-2t}\)
Exercise 4.6 Solve the Initial value problem \(2y'' + 5y' - 3y = 0\), with \(y(0) = 4\) and \(y'(0) = 2\).
- Characteristic equation: \(2r^2 + 5r - 3 = 0\)
- Roots: \(r \in \{\frac{1}{2}, -3\}\) (distinct real roots)
- General solution, \(y(t) = c_1e^{\frac{1}{2}t} + c_2e^{-3t}\)
- Apply initial conditions \[\begin{alignat*}{3} y(0) &= &\ c_1\ &+ &\ c_2 &=4\\ y'(0)&= &\ \frac{1}{2}c_1 \ &- &\ 3c_2 &= 2 \end{alignat*}\]
- Solve for constants: \(c_1 = 4\) and \(c_2 = 0\)
- Solution to IVP: \(y(t) = 4e^{\frac{1}{2}t}\)
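As a quick check (a sympy sketch), dsolve with the same initial conditions reproduces this answer:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(2 * y(t).diff(t, 2) + 5 * y(t).diff(t) - 3 * y(t), 0)
ics = {y(0): 4, y(t).diff(t).subs(t, 0): 2}

print(sp.dsolve(ode, y(t), ics=ics))   # y(t) = 4*exp(t/2)
```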
Exercise 4.7 Solve the Boundary Value problem \[y'' - 6y' + 9y = 0, \qquad y(0) = 3, \qquad y(1) = 0\]
- Characteristic equation: \(r^2 - 6r + 9 = 0\)
- Roots: \(r = 3\) (double root)
- General solution, \(y(t) = c_1e^{3t} + c_2te^{3t}\)
- Apply boundary conditions \[\begin{alignat*}{3} y(0)&= &\ c_1\ & & &=3 \\ y(1)&= &\ e^{3}c_1 \ &+ &\ e^{3}c_2 &= 0 \end{alignat*}\]
- Solve for constants: \(c_1 = 3\) and \(c_2 = -3\)
- Solution to BVP: \(y(t) = 3e^{3t} - 3te^{3t}\)
4.3 Finding particular solutions of the nonhomogeneous problem
There are two textbook approaches to finding solutions to a nonhomogeneous linear problem. The first, variation of parameters, gives us a general formula for the solution that works in the constant coefficient case, and, with caveats, in the nonconstant coefficient case. The second approach, undetermined coefficients, is essentially a trick for the constant coefficient problem that works for certain forcing functions but not in general.
4.3.1 Variation of Parameters
Consider the general linear second order differential equation, \[a(x)y'' + b(x)y' + c(x)y = g(x) \tag{4.4}\] The formulae we derive in this section allow \(a\), \(b\), and \(c\) to depend on \(x\), but we’ll drop the explicit dependence in the derivations for the sake of clarity.
Before we derive the general formula, it’s worth looking at how we can use a solution to the homogeneous problem to simplify the nonhomogeneous problem.
Suppose we know one solution, \(y_1\), to the homogeneous problem (\(g=0\)). We express the general solution as a product of \(y_1\) and another function, say \(u\), and derive the ODE that \(u\) satisfies. I.e., substitute \(y = uy_1\) into our nonhomogeneous problem. \[\begin{align*}
a \left(u''y_1 + 2u'y_1'+uy_1''\right) + b\left(u'y_1+uy_1'\right) + cuy_1 &= g \\
a \left(u''y_1 + 2u'y_1'\right) + bu'y_1 + auy_1''+buy_1' +cuy_1 &= g \\
a \left(u''y_1 + 2u'y_1'\right) + bu'y_1 &= g
\end{align*}\] On that last step, we used the fact that \(y_1\) is a solution to the homogeneous problem. The last few terms cancel leaving us with an equation that only involves the derivatives of \(u\). Although this is still a second order equation for \(u\), we can view it as a first order equation for \(u'\). To make this easier to see, gather up the coefficients of \(u''\) and \(u'\) and let \(U = u'\). \[ay_1 U' + \left(2ay_1' + by_1\right)U = g\] We can solve this first order linear equation for \(U\), integrate \(U\) to find \(u\), and multiply by \(y_1\) to find a solution to the original nonhomogeneous second order problem for \(y\).
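Here is a sketch of this reduction-of-order idea on a small illustrative problem (our choice, not from the notes): \(y'' - y = x\), using the known homogeneous solution \(y_1 = e^{x}\):

```python
import sympy as sp

x = sp.symbols('x')
U = sp.Function('U')

# Illustrative problem: y'' - y = x, with known homogeneous solution y1 = e^x.
# Here a = 1, b = 0, c = -1 and g = x.
a, b, g = 1, 0, x
y1 = sp.exp(x)

# First order equation for U = u':  a*y1*U' + (2*a*y1' + b*y1)*U = g
ode_U = sp.Eq(a * y1 * U(x).diff(x) + (2 * a * y1.diff(x) + b * y1) * U(x), g)
U_sol = sp.dsolve(ode_U, U(x)).rhs

# Integrate U to recover u, then y = u*y1 solves the nonhomogeneous problem
u = sp.integrate(U_sol, x)
y = sp.simplify(u * y1)
print(y)                                   # -x plus a multiple of e^{-x}
print(sp.simplify(y.diff(x, 2) - y - x))   # 0
```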
Now suppose we know two linearly independent solutions, \(y_1\) and \(y_2\), to the homogeneous problem, Equation 4.4 with \(g=0\). We can use the same idea to find a particular solution to the nonhomogeneous problem Equation 4.4. The trick is to look for a solution of the form \(y(x) = u(x)y_1(x) + v(x)y_2(x)\). Notice that \(y' = u'y_1 + uy'_1 + v'y_2 + vy'_2\). It turns out that looking for two functions, \(u\) and \(v\), gives us enough freedom to choose these functions so that \(u'y_1 + v'y_2 = 0\). This simplifies \(y'\) to \(uy'_1 + vy'_2\), and then \(y'' = u'y'_1 + uy_1'' + v'y'_2 + vy_2''\). Substituting these into our general ODE yields \[a \left(u'y'_1 + uy_1'' + v'y'_2 + vy_2''\right) + b\left(uy'_1 + vy'_2\right) + c\left(uy_1 + vy_2\right) = g\] This looks nasty, but recognizing that \(y_1\) and \(y_2\) satisfy the homogeneous equation, it reduces to \[au'y'_1 + av'y'_2 = g\] Thus, we’ve converted our second order problem for \(y\) into a pair of first order problems for \(u'\) and \(v'\). In matrix form, the pair of equations and their solution have a nice structure. \[\begin{equation} \begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix} \begin{pmatrix} u' \\ v' \end{pmatrix} =\begin{pmatrix} 0 \\ \frac{g}{a} \end{pmatrix} \end{equation}\] \[\begin{equation} \begin{pmatrix} u' \\ v' \end{pmatrix} =\frac{1}{a} \begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix}^{-1} \begin{pmatrix} 0 \\ g \end{pmatrix} \end{equation}\] \[\begin{equation} \begin{pmatrix} u' \\ v' \end{pmatrix} =\frac{1}{a(y_1y_2' - y_1'y_2)} \begin{pmatrix} y_2' & -y_2 \\ -y_1' & y_1 \end{pmatrix} \begin{pmatrix} 0 \\ g \end{pmatrix} \end{equation}\] On the right hand side we have known functions of \(x\). The three functions \(a\), \(y_1\), and \(y_2\) represent our system, and the fourth function, \(g\), usually represents something like an external forcing or applied voltage. We’ll see this matrix form again when we study systems of first order equations.
The determinant, \(y_1y_2' - y_1'y_2\), is known as the Wronskian. For two linearly independent solutions of the homogeneous equation with \(a(x)\ne 0\), the Wronskian is never zero (a consequence of Abel’s theorem), so we can always solve this system for \(u'\) and \(v'\).
We can express \(u\) and \(v\) as integrals and express the solution \(y\) as \[y = -y_1\int \frac{y_2g}{a(y_1y_2' - y_1'y_2)}\,dx + y_2\int \frac{y_1g}{a(y_1y_2' - y_1'y_2)}\,dx \tag{4.5}\] We’ve used the two solutions to the homogeneous problem to reduce our nonhomogeneous problem to two integrations of known functions.
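Equation 4.5 translates directly into a short helper function (the name `particular_solution` is ours). The sketch below applies it to the illustrative problem \(y'' - y = e^{x}\), whose homogeneous solutions are \(e^{x}\) and \(e^{-x}\):

```python
import sympy as sp

x = sp.symbols('x')

def particular_solution(a, g, y1, y2):
    """Variation of parameters, Equation 4.5 (sketch)."""
    W = y1 * sp.diff(y2, x) - sp.diff(y1, x) * y2   # Wronskian
    u = -sp.integrate(y2 * g / (a * W), x)
    v = sp.integrate(y1 * g / (a * W), x)
    return sp.simplify(u * y1 + v * y2)

# Illustrative example: y'' - y = e^x with y1 = e^x, y2 = e^{-x}
yp = particular_solution(1, sp.exp(x), sp.exp(x), sp.exp(-x))
print(yp)                                            # x*exp(x)/2 - exp(x)/4 (or equivalent)
print(sp.simplify(yp.diff(x, 2) - yp - sp.exp(x)))   # 0
```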
Example 4.3 Use the method of variation of parameters to find the general solution to \(y'' + 9y = \sec 3x\).
First, solve the homogeneous problem. The characteristic equation is \(r^2+9=0\), which has complex roots, \(r=\pm 3i\). The two solutions are \(y_1(x) = \cos(3x)\) and \(y_2(x) = \sin(3x)\).
Second, compute \(y_1'\) and \(y_2'\) and set up the system for \(u'\) and \(v'\) \[\begin{align*} u'\cos 3x + v' \sin(3x) &= 0 \\ -3u'\sin 3x + 3v' \cos(3x) &= \sec 3x \end{align*}\] Isolate \(u'\) by multiplying the first equation through by \(3\cos(3x)\), the second through by \(\sin(3x)\), and taking the difference of the results \[u' = -\frac{1}{3}\sin(3x) \sec(3x)\] Similarly, we find \[v' = \frac{1}{3}\cos(3x) \sec(3x) = \frac{1}{3}\] Integrate to find \(u\) \[\begin{align*} u(x) &= -\frac{1}{3}\int \sin(3x) \sec(3x) \, dx \\ &= -\frac{1}{3}\int \tan(3x) \, dx \\ &= \frac{1}{9}\log | \cos 3x | \end{align*}\] and \(v = \frac{x}{3}\)
Putting everything together,
\[y(x) = c_1\cos(3x) + c_2\sin(3x) + \frac{1}{9}\cos(3x)\log | \cos(3x) | + \frac{1}{3}x\sin(3x)\]
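As a cross-check (a sympy sketch), we can substitute the particular terms back into the equation; dsolve also solves the full problem, possibly writing the logarithmic term in a different but equivalent form:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Substitute the particular terms from Example 4.3 back into the equation
yp = sp.cos(3*x) * sp.log(sp.cos(3*x)) / 9 + x * sp.sin(3*x) / 3
print(sp.simplify(yp.diff(x, 2) + 9 * yp - sp.sec(3*x)))   # 0

# dsolve handles the whole problem as well
print(sp.dsolve(sp.Eq(y(x).diff(x, 2) + 9 * y(x), sp.sec(3*x)), y(x)))
```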
4.3.2 The method of Undetermined Coefficients
Table 7.2 of OpenStax Calculus Volume 3 lists the main forms of forcing functions amenable to the method. Be sure to also understand the problem solving strategy outlined below that table.
Table 5.1 on page 132 of Kalbaugh’s text gives the same information and further examples.
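As a small illustration of the strategy (an example of ours, not taken from either table), for \(y'' - 6y' + 5y = 4e^{3x}\) the standard guess is \(y_p = Ae^{3x}\), since 3 is not a characteristic root; substituting and solving for the undetermined coefficient gives \(A = -1\):

```python
import sympy as sp

x, A = sp.symbols('x A')

# Guess y_p = A*e^{3x} for y'' - 6y' + 5y = 4e^{3x}
yp = A * sp.exp(3 * x)
residual = yp.diff(x, 2) - 6 * yp.diff(x) + 5 * yp - 4 * sp.exp(3 * x)

# Solve for the undetermined coefficient
print(sp.solve(sp.Eq(residual, 0), A))   # [-1], so y_p = -e^{3x}
```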
4.3.3 Exercises
All the problems in OpenStax Calculus Volume 3 Section 7.2 are good practice.
4.4 Initial and boundary value problems
The general form of the solution has two arbitrary constants. Thus it is a two-parameter family of solutions. In practice, we are often interested in a solution satisfying additional constraints. These are typically posed as either initial conditions or boundary conditions. Initial conditions are given as constraints of the form \(y(0) = y_1\) and \(y'(0) = y_2\) for some specified constants \(y_1\) and \(y_2\). Boundary conditions are typically given as \(y(0) = y_0\) and \(y(1) = y_1\), but could involve derivatives of \(y\) at the boundaries.
Example 4.4 Find a solution to \(y'' + 2y' - 3 = 2x\) satisfying the initial conditions \(y(0) = 0\) and \(y'(0) = 1\).
4.5 Summary
To find two linearly independent solutions to a homogeneous second order linear differential equation with constant coefficients, \(ay'' + by' + cy = 0,\) first compute the characteristic roots \(r = \frac{-b \pm \sqrt{b^2-4ac}}{2a}.\) Second, determine which of the following three cases holds and write down the appropriate solutions:
- Two distinct real roots (\(b^2 > 4ac\)): \(y_1 = e^{r_1t}, \qquad y_2 = e^{r_2t}\) with \(r_1 = \frac{-b - \sqrt{b^2-4ac}}{2a}\) and \(r_2 = \frac{-b + \sqrt{b^2-4ac}}{2a}\).
- A double root (\(b^2 = 4ac\)): \(y_1 = e^{rt}, \qquad y_2 = te^{rt}\) with \(r = \frac{-b}{2a}\).
- Complex roots (\(b^2 < 4ac\)): \(y_1 = e^{\alpha t} \cos \beta t, \qquad y_2 = e^{\alpha t} \sin \beta t,\) with \(\alpha = -\frac{b}{2a}\) and \(\beta = \frac{\sqrt{4ac-b^2}}{2a}\).
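This recipe is mechanical enough to automate; the following sketch (the names and structure are ours) returns the two basis solutions for given constant coefficients:

```python
import sympy as sp

def homogeneous_basis(a, b, c):
    """Return two independent solutions of a*y'' + b*y' + c*y = 0 (sketch)."""
    t = sp.symbols('t')
    disc = sp.Rational(b)**2 - 4 * sp.Rational(a) * sp.Rational(c)
    if disc > 0:                                  # two distinct real roots
        r1 = (-b - sp.sqrt(disc)) / (2 * a)
        r2 = (-b + sp.sqrt(disc)) / (2 * a)
        return sp.exp(r1 * t), sp.exp(r2 * t)
    if disc == 0:                                 # double root
        r = sp.Rational(-b, 2 * a)
        return sp.exp(r * t), t * sp.exp(r * t)
    alpha = sp.Rational(-b, 2 * a)                # complex roots
    beta = sp.sqrt(-disc) / (2 * a)
    return sp.exp(alpha * t) * sp.cos(beta * t), sp.exp(alpha * t) * sp.sin(beta * t)

print(homogeneous_basis(1, -4, 13))   # (exp(2*t)*cos(3*t), exp(2*t)*sin(3*t))
print(homogeneous_basis(1, 4, 4))     # (exp(-2*t), t*exp(-2*t))
print(homogeneous_basis(2, 5, -3))    # (exp(-3*t), exp(t/2))
```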
4.6 Discussion
Note that we refer to Equation 4.1 as linear when it is linear in the dependent variable. It may be nonlinear in the independent variable. For example,
applying Kirchhoff’s law to a simple electric circuit yields the model \[LQ''(t) + RQ'(t) + \frac{1}{C}Q(t) = E(t)\] where the dependent variable, \(Q\), is the charge, and is assumed to depend on the time, \(t\). The parameters \(L\), \(R\) and \(C\) are the inductance, resistance and capacitance of the circuit and \(E\) is an applied voltage. If the fixed resistance \(R\) is replaced by a variable resistance, \(R(t)\), the equation is still linear in the dependent variable.
More generally, a differential equation is an equation involving a function and some of its derivatives.
For example, a simple model for the angular deflection \(\theta(t)\) of a pendulum of length \(l\) is \[l\theta''(t) + g\sin \theta(t) = 0,\] which arises from balancing kinetic and potential energies. This equation is nonlinear in the dependent variable \(\theta\). For small angles, we can approximate \(\sin(\theta)\) by \(\theta\), leading to the linear second order constant coefficient model \[l\theta''(t) + g\theta(t) = 0.\] Most systems are nonlinear. However, most analyses begin with the study of linear approximations. Hence the importance of studying linear differential equations.
4.7 Theoretical Considerations
A good treatment of the existence and uniqueness of solutions to differential equations can be found in Hartman. This material is approachable, but requires a deeper exposure to linear algebra than you received in Math 1503.
Consider the general second order linear ODE of Equation 4.1 with \(G=0\). Let \(w(x) = y'(x)\) and write the single equation as
\[P(x)w'(x) + Q(x)w(x) + R(x)y(x) = 0,\]
which leads to the linear system \[\begin{align} w'(x) &= - \frac{Q(x)}{P(x)}w(x) - \frac{R(x)}{P(x)}y(x), \\ y'(x) &= w(x) \end{align}\] Hartman’s text deals with the more general equation \(\vec{y}' = f(x,\vec{y})\) where \(\vec{y}\) and \(f\) are vector functions. The result for our simpler linear system is that the initial value problem has a unique solution provided \(Q/P\) and \(R/P\) are both continuous.
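This rewrite is also exactly how one hands a second order problem to a numerical integrator. Below is a minimal sketch, assuming illustrative constant coefficients (our choice) and scipy’s solve_ivp, treating \((y, w)\) as the state vector:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' + 0.5*y' + 4*y = 0 rewritten as a first order system:
#   y' = w
#   w' = -0.5*w - 4*y
def rhs(t, state):
    y, w = state
    return [w, -0.5 * w - 4.0 * y]

# Integrate with y(0) = 1, y'(0) = 0
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
print(sol.y[0, -1])   # y(10), a small value from the damped oscillation
```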