Taylor series | Essence of calculus, chapter 11

When I first learned about Taylor series,
I definitely didn’t appreciate how important they are.
But time and time again they come up in math, physics, and many fields of engineering because
they’re one of the most powerful tools that math has to offer for approximating functions. One of the first times this clicked for me
as a student was not in a calculus class, but in a physics class.
We were studying some problem that had to do with the potential energy of a pendulum,
and for that you need an expression for how high the weight of the pendulum is above its
lowest point, which works out to be proportional to one minus the cosine of the angle between
the pendulum and the vertical. The specifics of the problem we were trying
to solve are beside the point here, but I’ll just say that this cosine function made the
problem awkward and unwieldy. But by approximating cos(theta) as 1 – theta²/2,
of all things, everything fell into place much more easily.
If you’ve never seen anything like this before, an approximation like that might seem
completely out of left field. If you graph cos(theta) along with this function
1 – theta²/2, they do seem rather close to each other for small angles near 0, but how
would you even think to make this approximation? And how would you find this particular quadratic?
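If you want to see that closeness numerically rather than on a graph, here is a minimal Python check. This is my own illustration, not something from the lesson itself:

```python
import math

# Illustrative sketch (not from the video): compare cos(theta)
# with the quadratic approximation 1 - theta^2/2 for small angles (radians).
for theta in [0.01, 0.1, 0.5, 1.0]:
    approx = 1 - theta**2 / 2
    error = abs(math.cos(theta) - approx)
    print(f"theta={theta:<4}  cos={math.cos(theta):.6f}  approx={approx:.6f}  error={error:.2e}")
```

Notice how quickly the error shrinks as the angle gets small.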
The study of Taylor series is largely about taking non-polynomial functions, and finding
polynomials that approximate them near some input.
The motive is that polynomials tend to be much easier to deal with than other functions:
They’re easier to compute, easier to take derivatives, easier to integrate…they’re
just all around friendly. So let’s look at the function cos(x), and
take a moment to think about how you might find a quadratic approximation near x=0.
That is, among all the polynomials that look like c₀ + c₁x + c₂x² for some choice of the constants
c₀, c₁ and c₂, find the one that most resembles cos(x) near x=0; whose graph kind of spoons
with the graph of cos(x) at that point. Well, first of all, at the input 0 the value
of cos(x) is 1, so if our approximation is going to be any good at all, it should also
equal 1 when you plug in 0. Plugging in 0 just results in whatever c₀ is, so we can
set that equal to 1. This leaves us free to choose the constants c₁
and c₂ to make this approximation as good as we can, but nothing we do to them will
change the fact that the polynomial equals 1 at x=0.
It would also be good if our approximation had the same tangent slope as cos(x) at
this point of interest. Otherwise, the approximation drifts away from the cosine graph even for
values of x very close to 0. The derivative of cos(x) is -sin(x), and at
x=0 that equals 0, meaning its tangent line is flat.
Working out the derivative of our quadratic, you get c₁ + 2c₂x. At x=0 that equals whatever
we choose for c₁. So this constant c₁ controls the derivative of our approximation around
x=0. Setting it equal to 0 ensures that our approximation has the same derivative as cos(x),
and hence the same tangent slope. This leaves us free to change c₂, but the
value and slope of our polynomial at x=0 are locked in place to match those of cos(x). The cosine graph curves downward above x=0;
it has a negative second derivative. Or in other words, even though the rate of change
is 0 at that point, the rate of change itself is decreasing around that point.
Specifically, since its derivative is -sin(x) its second derivative is -cos(x), so at x=0
its second derivative is -1. In the same way that we wanted the derivative
of our approximation to match that of cosine, so that their values wouldn’t drift apart
needlessly quickly, making sure that their second derivatives match will ensure that
they curve at the same rate; that the slope of our polynomial doesn’t drift away from
the slope of cos(x) any more quickly than it needs to.
Pulling out that same derivative we had before, then taking its derivative, we see that the
second derivative of this polynomial is exactly 2c₂, so to make sure this second derivative
also equals -1 at x=0, 2c₂ must equal -1, meaning c₂ itself has to be -½.
This gives us the approximation 1 + 0x – ½x². To get a feel for how good this is, if you
estimated cos(0.1) with this polynomial, you’d get 0.995, while the true value of cos(0.1) is about 0.99500.
It’s a really good approximation. Take a moment to reflect on what just happened.
You had three degrees of freedom with a quadratic approximation: the constants c₀, c₁, and c₂.
c₀ was responsible for making sure that the output of the approximation matches that of
cos(x) at x=0, c₁ was in charge of making sure the derivatives match at that point,
and c₂ was responsible for making sure the second derivatives match up.
This ensures that the way your approximation changes as you move away from x=0, and the
way that the rate of change itself changes, is as similar as possible to the behavior of cos(x),
given the amount of control you have.
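By the way, if you’d like to see those three matching conditions carried out symbolically, here is a minimal SymPy sketch. It is my own illustration, assuming SymPy is installed, not part of the original lesson:

```python
import sympy as sp

# Illustrative sketch (not from the video): solve for the quadratic
# c0 + c1*x + c2*x^2 whose value, slope, and curvature match cos(x) at x = 0.
x, c0, c1, c2 = sp.symbols('x c0 c1 c2')
p = c0 + c1*x + c2*x**2   # candidate quadratic
f = sp.cos(x)             # function to approximate near x = 0

conditions = [
    sp.Eq(p.subs(x, 0), f.subs(x, 0)),                                # values match
    sp.Eq(sp.diff(p, x).subs(x, 0), sp.diff(f, x).subs(x, 0)),        # slopes match
    sp.Eq(sp.diff(p, x, 2).subs(x, 0), sp.diff(f, x, 2).subs(x, 0)),  # curvatures match
]
print(sp.solve(conditions, [c0, c1, c2]))  # -> {c0: 1, c1: 0, c2: -1/2}
```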
You could give yourself more control by allowing more terms in your polynomial, and matching
higher order derivatives of cos(x). For example, add on the term c₃x³ for some constant c₃.
If you take the third derivative of a cubic polynomial, anything quadratic or smaller
goes to 0. As for that last term, after three iterations
of the power rule it looks like 1*2*3*c₃. On the other hand, the third derivative of
cos(x) is sin(x), which equals 0 at x=0, so to make the third derivatives match, the constant
c₃ should be 0. In other words, not only is 1 – ½x² the
best possible quadratic approximation of cos(x) around x=0, it’s also the best possible
cubic approximation. You can actually make an improvement by adding
a fourth order term, c₄x⁴. The fourth derivative of cos(x) is itself, which equals 1 at x=0.
And what’s the fourth derivative of our polynomial with this new term? Well, when
you keep applying the power rule over and over, with those exponents all hopping down
front, you end up with 1*2*3*4*c₄, which is 24c₄.
So if we want this to match the fourth derivative of cos(x), which is 1, c₄ must be 1/24.
And indeed, the polynomial 1 – ½x² + 1/24 x⁴, which looks like this, is a very close
approximation for cos(x) around x=0. In any physics problem involving the cosine
of some small angle, for example, predictions would be almost unnoticeably different if
you substituted this polynomial for cos(x). Now, step back and notice a few things about
this process. First, factorial terms naturally come up in
this process. When you take n derivatives of xⁿ, letting
the power rule just keep cascading, what you’re left with is 1*2*3 and on up to n.
So you don’t simply set the coefficients of the polynomial equal to whatever derivative
value you want, you have to divide by the appropriate factorial to cancel out this effect.
For example, that x⁴ coefficient is the fourth derivative of cosine, 1, divided by 4 factorial,
24. The second thing to notice is that adding
new terms, like this c₄x⁴, doesn’t mess up what old terms should be, and that’s
important. For example, the second derivative of this
polynomial at x=0 is still equal to 2 times the second coefficient, even after introducing
higher order terms to the polynomial. And it’s because we’re plugging in x=0,
so the second derivative of any higher order terms, which all include an x, will wash away.
The same goes for any other derivative, which is why each derivative of a polynomial at
x=0 is controlled by one and only one coefficient. If instead you were approximating near an
input other than 0, like x=pi, in order to get the same effect you would have to write
your polynomial in terms of powers of (x – pi), or whatever input you’re looking at.
This makes it look notably more complicated, but all it’s doing is making the point pi
look like 0, so that plugging in x=pi will result in a lot of nice cancellation that leaves
only one constant. And finally, on a more philosophical level,
notice how what we’re doing here is essentially taking information about the higher order
derivatives of a function at a single point, and translating it into information about
the value of that function near that point. We can take as many derivatives of cos(x)
as we want; it follows this nice cyclic pattern cos(x), -sin(x), -cos(x), sin(x), and repeat.
So the values of these derivatives at x=0 have the cyclic pattern 1, 0, -1, 0, and repeat.
And knowing the values of all those higher-order derivatives is a lot of information about
cos(x), even though it only involved plugging in a single input, x=0.
That information is leveraged to get an approximation around this input by creating a polynomial
whose higher order derivatives match up with those of cos(x), following this same 1, 0,
-1, 0 cyclic pattern. To do that, make each coefficient of this
polynomial follow this same pattern, but divide each one by the appropriate factorial, like
I mentioned before, so as to cancel out the cascading effects of many power rule applications.
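As a concrete sketch of that recipe, here is my own few-line version in Python, building the coefficients straight from the repeating 1, 0, -1, 0 pattern and dividing each by the matching factorial:

```python
import math

def cos_taylor(x, num_terms=12):
    # Illustrative sketch (not from the video): derivatives of cos at 0
    # cycle through 1, 0, -1, 0, so coefficient n is pattern[n % 4] / n!.
    pattern = [1, 0, -1, 0]
    return sum(pattern[n % 4] / math.factorial(n) * x**n
               for n in range(num_terms))

print(cos_taylor(1.0))   # ~0.5403023
print(math.cos(1.0))     # 0.5403023058681398
```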
The polynomials you get by stopping this process at any point are called “Taylor polynomials”
for cos(x) around the input x=0. More generally, and hence more abstractly,
if we were dealing with some function other than cosine, you would compute its derivative,
second derivative, and so on, getting as many terms as you’d like, and you’d evaluate
each one at x=0. Then for your polynomial approximation, the
coefficient of each xⁿ term should be the value of the nth derivative of the function
at 0, divided by (n!). This rather abstract formula is something
you’ll likely see in any text or course touching on Taylor polynomials.
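That formula is short enough to turn directly into code. Here is a hedged sketch using SymPy; the helper name taylor_poly is my own invention, not a standard API:

```python
import sympy as sp

x = sp.symbols('x')

def taylor_poly(f, n, a=0):
    # Illustrative sketch (not from the video): the nth Taylor polynomial
    # of f about x = a, i.e. the sum over k of f^(k)(a) / k! * (x - a)^k.
    return sum(sp.diff(f, x, k).subs(x, a) / sp.factorial(k) * (x - a)**k
               for k in range(n + 1))

print(sp.expand(taylor_poly(sp.cos(x), 4)))  # -> x**4/24 - x**2/2 + 1
```

Passing a nonzero a gives the shifted version written in powers of (x – a), which comes up in a moment.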
And when you see it, think to yourself that the constant term ensures that the value of
the polynomial matches that of f(x) at x=0, the next term ensures that the slope of the
polynomial matches that of the function, the next term ensures that the rate at which that slope
changes is the same, and so on, depending on how many terms you want.
The more terms you choose, the closer the approximation, but the tradeoff is that your
polynomial is more complicated. And if you want to approximate near some input
a other than 0, you write the polynomial in terms of (x-a) instead, and evaluate all the
derivatives of f at that input a. This is what Taylor series look like in their
fullest generality. Changing the value of a changes where the approximation is hugging
the original function; where its higher order derivatives will be equal to those of the
original function. One of the simplest meaningful examples is
e^x, around the input x=0. Computing its derivatives is nice, since the derivative of e^x is itself,
so its second derivative is also e^x, as is its third, and so on.
So at the point x=0, these are all 1. This means our polynomial approximation looks like
1 + x + ½x² + 1/(3!) x³ + 1/(4!) x⁴, and so on, depending on how many terms you want.
These are the Taylor polynomials for e^x.
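As a quick numerical sanity check, here is my own comparison of these partial sums against the true exponential:

```python
import math

def exp_taylor(x, num_terms):
    # Illustrative sketch (not from the video): the partial sum
    # 1 + x + x^2/2! + ... + x^(n-1)/(n-1)!.
    return sum(x**n / math.factorial(n) for n in range(num_terms))

for num_terms in [2, 4, 8]:
    print(num_terms, "terms:", exp_taylor(1.0, num_terms))
print("exp(1) =", math.exp(1.0))
```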
In the spirit of showing you just how connected the topics of calculus are, let me turn to a completely different way to understand this
second order term geometrically. It’s related to the fundamental theorem of calculus, which
I talked about in chapters 1 and 8. Like we did in those videos, consider a function
that gives the area under some graph between a fixed left point and a variable right point.
What we’re going to do is think about how to approximate this area function, not the
function for the graph like we were doing before. Focusing on that area is what will
make the second order term pop out. Remember, the fundamental theorem of calculus
is that this graph itself represents the derivative of the area function, and as a reminder it’s
because a slight nudge dx to the right bound on the area gives a new bit of area approximately
equal to the height of the graph times dx, in a way that’s increasingly accurate for
smaller choices of dx. So df over dx, the change in area divided
by that nudge dx, approaches the height of the graph as dx approaches 0.
But if you wanted to be more accurate about the change to the area given some change to
x that isn’t meant to approach 0, you would take into account this portion right here,
which is approximately a triangle. Let’s call the starting input a, and the
nudged input above it x, so that this change is (x-a).
The base of that little triangle is that change (x-a), and its height is the slope of the
graph times (x-a). Since this graph is the derivative of the area function, that slope
is the second derivative of the area function, evaluated at the input a.
So the area of that triangle, ½ base times height, is one half times the second derivative
of the area function, evaluated at a, multiplied by (x-a)².
And this is exactly what you see with Taylor polynomials. If you knew the various derivative
information about the area function at the point a, you would approximate this area at
x to be the area up to a, f(a), plus the area of this rectangle, which is the first derivative
times (x-a), plus the area of this triangle, which is ½ (the second derivative) * (x – a)².
I like this, because even though it looks a bit messy all written out, each term has
a clear meaning you can point to on the diagram.
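To put numbers to the diagram, here is a small check of my own. I use the graph of e^t, so the area function has the known closed form e^x – 1; that choice is just an assumption to make the true area easy to compute:

```python
import math

# Illustrative sketch (not from the video): second-order approximation
# of the area under e^t between 0 and x, built from information at a.
a, x = 1.0, 1.3                              # left point of the nudge, nudged input

def area(t):
    # Area under e^s from s = 0 to s = t (closed form for this example).
    return math.exp(t) - 1.0

rectangle = math.exp(a) * (x - a)            # graph height at a, times the change
triangle = 0.5 * math.exp(a) * (x - a)**2    # ½ * (second derivative at a) * (x-a)²

print("true area up to x:          ", area(x))
print("area(a) + rectangle:        ", area(a) + rectangle)
print("area(a) + rect + triangle:  ", area(a) + rectangle + triangle)
```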
We could call it an end here, and you’d have a phenomenally useful tool for approximations with these Taylor polynomials.
But if you’re thinking like a mathematician, one question you might ask is if it makes
sense to never stop, and add up infinitely many terms.
In math, an infinite sum is called a “series”, so even though one of the approximations with
finitely many terms is called a “Taylor polynomial” for your function, adding all
infinitely many terms gives what’s called a “Taylor series”.
Now you have to be careful with the idea of an infinite series, because it doesn’t actually
make sense to add infinitely many things; you can only hit the plus button on the calculator
so many times. But if you have a series where adding more
and more terms gets you increasingly close to some specific value, you say the series
converges to that value. Or, if you’re comfortable extending the definition of equality to include
this kind of series convergence, you’d say the series as a whole, this infinite sum,
equals the value it converges to. For example, look at the Taylor polynomials
for e^x, and plug in some input like x=1. As you add more and more polynomial terms,
the total sum gets closer and closer to the value e, so we say that the infinite series
converges to the number e. Or, what’s saying the same thing, that it equals the number
e. In fact, it turns out that if you plug in
any other value of x, like x=2, and look at the value of higher and higher order Taylor
polynomials at this value, they will converge towards e^x, in this case e².
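Here is that convergence made explicit at x=2, with a short loop of my own printing the running partial sums:

```python
import math

# Illustrative sketch (not from the video): partial sums of the
# Taylor series of e^x at x = 2, approaching e^2.
x, partial = 2.0, 0.0
for n in range(12):
    partial += x**n / math.factorial(n)
    print(f"{n + 1:2d} terms: {partial:.8f}")
print(f"e^2      = {math.exp(x):.8f}")
```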
This is true for any input, no matter how far away from 0 it is, even though these Taylor
polynomials are constructed only from derivative information gathered at the input 0.
In a case like this, we say e^x equals its Taylor series at all inputs x, which is kind
of a magical thing to have happen. Although this is also true for some other
important functions, like sine and cosine, sometimes these series only converge within
a certain range around the input whose derivative information you’re using.
If you work out the Taylor series for the natural log of x around the input x=1, which
is built from evaluating the higher order derivatives of ln(x) at x=1, it works out to be
(x-1) – ½(x-1)² + ⅓(x-1)³ – ¼(x-1)⁴, and so on. When you plug in an input between 0 and 2,
adding more and more terms of this series will indeed get you closer and closer to the
natural log of that input. But outside that range, even by just a bit,
the series fails to approach anything. As you add more and more terms, the sum bounces
back and forth wildly; it does not approach the natural log of that value, even though
the natural log of x is perfectly well defined for inputs above 2.
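Both behaviors are easy to watch numerically. A small sketch of my own, using partial sums of this series for ln(x) about x=1:

```python
import math

def ln_partial(x, num_terms):
    # Illustrative sketch (not from the video): partial sums of the
    # Taylor series of ln(x) about x = 1: sum of (-1)^(n+1) (x-1)^n / n.
    return sum((-1)**(n + 1) * (x - 1)**n / n for n in range(1, num_terms + 1))

for x in [1.5, 3.0]:
    print(f"x = {x}  (true ln = {math.log(x):.4f})")
    for num_terms in [5, 10, 20, 40]:
        print(f"  {num_terms:2d} terms: {ln_partial(x, num_terms):+.4f}")
```

At x = 1.5 the partial sums settle down onto ln(1.5); at x = 3 they swing ever more wildly, exactly the divergence described above.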
In some sense, the derivative information of ln(x) at x=1 doesn’t propagate out that
far. In a case like this, where adding more terms
of the series doesn’t approach anything, you say the series diverges.
And that maximum distance between the input you’re approximating near, and points where
the outputs of these polynomials actually do converge, is called the “radius of convergence”
for the Taylor series. There remains more to learn about Taylor series,
their many use cases, tactics for placing bounds on the error of these approximations,
tests for understanding when these series do and don’t converge.
For that matter there remains more to learn about calculus as a whole, and the countless
topics not touched by this series. The goal with these videos is to give you
the fundamental intuitions that make you feel confident and efficient learning more on your
own, and potentially even rediscovering more of the topic for yourself.
In the case of Taylor series, the fundamental intuition to keep in mind as you explore more
is that they translate derivative information at a single point to approximation information
around that point. The next series like this will be on probability,
and if you want early access as those videos are made, you know where to go.
