# Taylor Series as Approximations

## Taylor Series Generalize Tangent Lines as Approximations

Rather than stop at a linear function as an approximation, we let the degree of our approximation increase (provided the necessary derivatives exist), until we have an approximation of the form

$T_n(x)=f(a)+f^\prime(a)(x-a)+\frac{f^{(2)}(a)}{2!}(x-a)^2+\ldots+\frac{f^{(n)}(a)}{n!}(x-a)^n$

This polynomial achieves perfect agreement with the function $f$ at $x=a$ in all derivatives up to the $n^{th}$ (including the $0^{th}$ -- that is, the function itself):

$\begin{cases} T_n(a)=f(a)\\ T_n^\prime(a)=f^\prime(a)\\ \vdots\\ T_n^{(n)}(a)=f^{(n)}(a) \end{cases}$
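This agreement is easy to check numerically. The following is a minimal Python sketch (the helper names are illustrative, not from the text): it builds the Maclaurin coefficients of $e^x$, differentiates the polynomial repeatedly, and confirms that each derivative at $0$ equals $e^0=1$.

```python
from math import factorial

def taylor_coeffs_exp(n):
    # Maclaurin coefficients of e^x: c_k = f^(k)(0)/k! = 1/k!
    return [1 / factorial(k) for k in range(n + 1)]

def poly_deriv(coeffs):
    # Differentiate a polynomial given by ascending-power coefficients
    return [k * c for k, c in enumerate(coeffs)][1:]

def eval_poly(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

# T_5 agrees with e^x at 0 in every derivative up to the 5th:
coeffs = taylor_coeffs_exp(5)
for j in range(6):
    # the jth derivative of T_5 at 0 should equal exp's, namely 1
    assert abs(eval_poly(coeffs, 0.0) - 1.0) < 1e-12
    coeffs = poly_deriv(coeffs)
```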

## Taylor Series Error

### The form of the Taylor Series Error

By stopping the process, we introduce an error:

$f(x)=T_n(x)+R_n(x)$

Taylor's Theorem asserts that

$R_n(x)=\frac{1}{n!}\int_a^x(x-u)^nf^{(n+1)}(u)\,du$

We can show this using the first principle of mathematical induction. First of all, we need to assume that the function $f$ is differentiable a sufficient number of times. We'll start by showing that the result holds for $n=0$:

Base Case: $n=0$

$R_0(x)=\frac{1}{0!}\int_a^x(x-u)^0f^{(0+1)}(u)\,du=\int_a^xf^\prime(u)\,du=f(x)-f(a)\equiv f(x)-T_0(x)$

Hence the result holds in the base case.

Implication: Assume that the result holds for $n=k$, and show that it holds for $n=k+1$. Note: we need to assume that the function is $k+2$ times differentiable on the interval of interest, $(a,x)$ or $(x,a)$ (depending on the relative positions of $a$ and $x$).

Consider $R_{k+1}(x)=\frac{1}{(k+1)!}\int_a^x(x-u)^{k+1}f^{(k+2)}(u)\,du$

We evaluate the integral via integration by parts. We want to step down the power on the polynomial term by differentiation, and reduce the order of the derivative term by integration (i.e., from $k+2$ to $k+1$); this dictates how we split the integrand into the two parts for integration by parts:

$R_{k+1}(x)=\frac{1}{(k+1)!}\left[ (x-u)^{k+1}f^{(k+1)}(u)\Big|_a^x + (k+1)\int_a^x(x-u)^{k}f^{(k+1)}(u)\,du \right]$

But the two pieces are familiar: the boundary term contributes $-\frac{1}{(k+1)!}(x-a)^{k+1}f^{(k+1)}(a)$, while the remaining integral is $(k+1)\,k!\,R_k(x)$, and by the induction hypothesis $R_k(x)=f(x)-T_k(x)$. Hence

$R_{k+1}(x)=-\frac{1}{(k+1)!}(x-a)^{k+1}f^{(k+1)}(a) + \left[f(x)-T_k(x)\right] \equiv f(x)-T_{k+1}(x)$

since $T_{k+1}(x)=T_k(x)+\frac{f^{(k+1)}(a)}{(k+1)!}(x-a)^{k+1}$.

So the result holds for $n=k+1$.

Conclusion: The first principle of induction assures us that this result holds for all $n \ge 0$ (provided the necessary derivatives exist).
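The integral form of the remainder can also be checked numerically. The sketch below (function names are mine, not from the text) evaluates $R_n(x)$ for $f(x)=e^x$, $a=0$, $n=2$, $x=1$ with a simple midpoint rule and compares it against $f(x)-T_2(x)=e-2.5$:

```python
from math import exp, factorial

def remainder_integral(n, a, x, deriv, steps=20000):
    # Midpoint-rule evaluation of R_n(x) = (1/n!) * integral of
    # (x-u)^n f^(n+1)(u) du from a to x, where deriv(k, u) returns
    # the kth derivative of f at u.
    h = (x - a) / steps
    total = 0.0
    for i in range(steps):
        u = a + (i + 0.5) * h
        total += (x - u) ** n * deriv(n + 1, u)
    return total * h / factorial(n)

# f(x) = e^x about a = 0: T_2(1) = 1 + 1 + 1/2, so R_2(1) should be e - 2.5
r2 = remainder_integral(2, 0.0, 1.0, lambda k, u: exp(u))
assert abs(r2 - (exp(1) - 2.5)) < 1e-7
```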

### Alternate Taylor Series Error

The error is related to the $(n+1)^{th}$ derivative:

$R_n(x)=\frac{1}{n!}\int_a^x(x-u)^nf^{(n+1)}(u)\,du=\frac{f^{(n+1)}(\xi(x))(x-a)^{n+1}}{(n+1)!}$

where $\xi(x)$ lies between $a$ and $x$.

We can pass from the integral to the form above by invoking the first mean value theorem for integration (see [1], from which the following is borrowed):

The first mean value theorem for integration states

If $G : [a, b] \to \mathbb{R}$ is a continuous function and $\varphi : [a, b] \to \mathbb{R}$ is an integrable positive function, then there exists a number $x$ in $(a, b)$ such that
$\int_a^b G(t)\varphi (t) \, dt=G(x) \int_a^b \varphi (t) \, dt.$

In this case, choose as the function $G$ the derivative term; the function $\varphi$ is the polynomial term (which is easy to integrate):

$R_n(x)=f^{(n+1)}(\xi(x))\frac{1}{n!}\int_a^x(x-u)^n\,du=f^{(n+1)}(\xi(x))\frac{(x-a)^{n+1}}{(n+1)!}$

By the way, the polynomial term $(x-u)^n$ may be positive or negative: what is important is that it holds its sign fixed on the interval of integration (we can just factor out a negative sign, then, if necessary).

### Bounding the Taylor Series Error

In particular, if we can bound the derivative term, $f^{(n+1)}(u)$, then we can bound the entire expression on the interval of interest:

$|R_n(x)| \le K\frac{|x-a|^{n+1}}{(n+1)!}$, where $|f^{(n+1)}(u)|\le K$ for all $u$ between $a$ and $x$.

This provides us with an error bound, telling us how bad the approximation will be in the worst case.
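For instance, every derivative of $\sin x$ is bounded by $K=1$, so for the cubic Maclaurin polynomial the bound reads $|R_3(x)|\le |x|^4/4!$. A quick check in Python (my example, not from the text):

```python
from math import sin, factorial

# f(x) = sin(x), a = 0, n = 3: every derivative of sin is bounded by K = 1,
# so the worst-case error is |R_3(x)| <= |x|**4 / 4!.
x = 0.5
t3 = x - x**3 / 6              # T_3(x) for sin about 0
actual_error = abs(sin(x) - t3)
worst_case = abs(x)**4 / factorial(4)
assert actual_error <= worst_case
```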

## Examples

Consider the function $f(x)=e^x$. This is a marvelous function, because it's equal to all of its derivatives. Hence the Taylor series expansion about $x=0$ is very simply computed:

$e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+ \ldots$

Hence, for example

$e^{-1}=1-1+\frac{1}{2!}-\frac{1}{3!}+\frac{1}{4!}- \ldots$

Thus the $n^{th}$ degree Taylor polynomial that agrees best with $f(x)$ expanded about zero is

$T_n(x)=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+ \ldots + \frac{x^n}{n!}$

and the error of approximating $e^{-1}$ using this polynomial is bounded by

$|R_n(-1)| \le \frac{1}{(n+1)!}$

since $|f^{(n+1)}(\xi)|=e^{\xi}\le 1$ for $\xi$ between $-1$ and $0$.

Hence, if we should desire an approximation of $e^{-1}$ to within a certain $\epsilon$ of its true value, then we should choose the $n^{th}$ degree polynomial such that

$|R_n(-1)| \le \frac{1}{(n+1)!}< \epsilon$

In other words, find the first value of $n$ that makes the inequality above true.
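That search is easy to automate; here is a short Python sketch (the function name is mine), which also verifies that the resulting polynomial really does land within the tolerance:

```python
from math import exp, factorial

def degree_for_tolerance(eps):
    # Smallest n with 1/(n+1)! < eps, which bounds |R_n(-1)| for e^{-1}
    n = 0
    while 1 / factorial(n + 1) >= eps:
        n += 1
    return n

n = degree_for_tolerance(1e-6)          # 1/10! ~ 2.76e-7, so n = 9
approx = sum((-1)**k / factorial(k) for k in range(n + 1))
assert abs(approx - exp(-1)) < 1e-6     # the actual error honors the bound
```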

Problem #31, p. 499, Rogawski:

We're trying to approximate the function $f(x)=e^x$ with a Taylor polynomial about 0 (i.e., a Maclaurin polynomial). We know that

$e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+ \ldots +\frac{x^n}{n!}+\frac{e^{\xi(x)}x^{n+1}}{(n+1)!}$

so that

$\left|e^x - \left(1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+ \ldots +\frac{x^n}{n!}\right)\right| = \left|\frac{e^{\xi(x)}x^{n+1}}{(n+1)!}\right|$

The claim is that for a given value $x=c>0$ the derivative term satisfies $e^{\xi(c)}\le e^{c}$, since $\xi(c)$ lies between $0$ and $c$.

In this problem, $c=0.1$ and $\epsilon=10^{-5}$. Find $n$.
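A sketch of the computation the problem asks for, using the bound $e^{c}|c|^{n+1}/(n+1)! < \epsilon$ from the claim above (the helper name is mine):

```python
from math import exp, factorial

def degree_needed(c, eps):
    # Smallest n with e^c * c**(n+1) / (n+1)! < eps; for c > 0 the
    # derivative term e^{xi(c)} is bounded by e^c, as claimed above.
    n = 0
    while exp(c) * c ** (n + 1) / factorial(n + 1) >= eps:
        n += 1
    return n

n = degree_needed(0.1, 1e-5)   # n = 3
# sanity check: the degree-n Maclaurin polynomial is within eps of e^0.1
assert abs(exp(0.1) - sum(0.1**k / factorial(k) for k in range(n + 1))) < 1e-5
```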

## Application: Muller's Method

One of the most important applications of the linearization $L(x)$ is in root-finding: that is, finding zeros of a non-linear function via Newton's method. Taylor series allow us to generalize this technique. Since we can easily find the roots of a quadratic, why not introduce the quadratic approximation, rather than the linearization? Why not use the "quadraticization"?

Given a function $f(x)$ and a guess $x_0$ for a root, we would first check whether $f(x_0)=0$. If it is, we're done; otherwise, we might try to improve the guess. How so?

Use the quadraticization $Q(x)=f(x_0)+f^\prime(x_0)(x-x_0)+\frac{f^{\prime\prime}(x_0)}{2}(x-x_0)^2$, and find a zero of it:

$x=x_0+\frac{-f^\prime(x_0) \pm \sqrt{ (f^\prime(x_0))^2-2f(x_0)f^{\prime\prime}(x_0)}}{f^{\prime\prime}(x_0)}$

This is Muller's method, an iterative scheme for improving the approximation of a root. Notice that there are two roots: how would you choose the root you want?

(By the way: the form of the root given above is not necessarily the best formula to use to compute the root. It is, however, formally the correct value of the root.)

### For example:

- Let $f(x)=\sqrt{x}-x$; then $f^\prime(x)=\frac{1}{2}x^{-1/2}-1$ and $f^{\prime\prime}(x)=-\frac{1}{4}x^{-3/2}$.
- Guess: $x_0=4$ (pretty bad guess! $f(4)=-2$)
- Improved guess: $x_1 \approx 1.1660104885167257$
- Now do it again! After another go, $x_2 \approx 1.0004246508122367$
- Once more: $x_3 \approx 1.0000000000095643$
- Once more: $x_4 \approx 1.0$
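These iterates can be reproduced with a few lines of Python. This is a sketch under one natural convention for the two-root question above: keep the root closer to $x_0$, i.e. the smaller step (the helper names are mine, and the code assumes the discriminant stays nonnegative):

```python
from math import sqrt

def muller_step(f, fp, fpp, x0):
    # One step of the quadratic iteration: solve Q(x) = 0, where
    # Q(x) = f(x0) + f'(x0)(x - x0) + (f''(x0)/2)(x - x0)^2,
    # and keep the root closer to x0 (the smaller step d).
    a, b, c = fpp(x0) / 2, fp(x0), f(x0)
    disc = sqrt(b * b - 4 * a * c)      # assumes a real root exists
    d1 = (-b + disc) / (2 * a)
    d2 = (-b - disc) / (2 * a)
    return x0 + (d1 if abs(d1) < abs(d2) else d2)

f   = lambda x: sqrt(x) - x
fp  = lambda x: 0.5 * x**-0.5 - 1
fpp = lambda x: -0.25 * x**-1.5

x = 4.0
for _ in range(4):
    x = muller_step(f, fp, fpp, x)      # 1.16601..., 1.00042..., ...
assert abs(x - 1.0) < 1e-9              # converged to the root at x = 1
```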

It's like a miracle... but it's just mathematics!

Muller's method generally converges faster than Newton's method, and can produce complex zeros (which Newton's method can never do, starting from a real-valued function).