Solution. First, assume $\lim_{x\to a}f(x)$ exists. This means for every sequence $x_n\to a$ with $x_n\neq a$, we know $\lim_n f(x_n)$ exists, and its value is independent of the choice of sequence. Now, choose an arbitrary sequence $h_n\to 0$ with $h_n\neq 0$ and consider the limit $\lim_n f(a+h_n)$. Since $h_n\to 0$ we see $a+h_n\to a$ (with $a+h_n\neq a$), and we already know for all such sequences the limit exists and takes the same value. Thus, since $\{h_n\}$ was arbitrary, our new limit always exists, and takes this same value! So $\lim_{h\to 0}f(a+h)$ exists.
The converse is an identical argument, but now we assume the limit $\lim_{h\to 0}f(a+h)$ exists; given any sequence $x_n\to a$ with $x_n\neq a$, and setting $h_n=x_n-a$, we conclude the limit of $f(x_n)=f(a+h_n)$ always exists.
Proof. We prove the contrapositive: that if $f$ is not monotone, then there must be a point where $f'=0$. Note that it's enough to find two points $a<b$ in the domain with $f(a)=f(b)$: if such a pair exists, then since $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$ we can apply Rolle's theorem (or the Mean Value Theorem) to get a point $c\in(a,b)$ with $f'(c)=0$.
So, we need only show that if a function is not monotone there must exist a pair of points mapped to the same value. Since $f$ is assumed to not be monotone, there must be some points $a<b$ where $f(a)<f(b)$ and other points $c<d$ where $f(c)>f(d)$. Let's give names to these points by how they're ordered on the number line: $p_1$ is the smallest, then $p_2$, and so on, so we have $p_1<p_2<p_3<p_4$. In this list we must have at least one of $p_2,p_3$ being a "peak" (bigger than its neighbors) or a "valley" (smaller than its neighbors): if not, then $f$ is either increasing all the way along this sequence of $p_i$'s or decreasing all the way, which leads to a contradiction with our original assumption (though it requires a bit more writing to match the $p_i$ up with the cases $f(a)<f(b)$ and $f(c)>f(d)$).
Thus, without loss of generality (perhaps multiplying $f$ by $-1$ if we had a valley) we can assume there are three points $u<v<w$ where $f(v)>f(u)$ and $f(v)>f(w)$. Now choose any value $y$ between $\max\{f(u),f(w)\}$ and $f(v)$. By the continuity of $f$ on $[u,v]$ we can use the intermediate value theorem to find a point in $(u,v)$ which maps to $y$, and similarly by the intermediate value theorem on $[v,w]$ we can find a point in this interval which maps to $y$. But now we have two points mapping to the same value, so our original argument applies and Rolle's theorem furnishes a point in between where the derivative must be zero!
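To see the argument in action, here is a small numerical sketch. The function $f(x)=x^3-x$ and the bracketing intervals are my own illustrative choices (not from the text): we pick a value $y$ below the peak, use bisection (a concrete form of the intermediate value theorem) to find two points mapping to $y$ on either side of the peak, and then locate the zero of $f'$ that Rolle's theorem promises between them.

```python
from math import sqrt

def f(x):
    return x**3 - x  # non-monotone: rises to a peak at x = -1/sqrt(3), then falls

def bisect(g, lo, hi, tol=1e-12):
    """Find a root of g in [lo, hi] by bisection (g must change sign)."""
    assert g(lo) * g(hi) <= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

peak = -1 / sqrt(3)                         # location of the local max of f
y = f(peak) / 2                             # a value strictly between f at the neighbors and the peak
a = bisect(lambda x: f(x) - y, -1, peak)    # IVT on the left of the peak: f(a) = y
b = bisect(lambda x: f(x) - y, peak, 0)     # IVT on the right of the peak: f(b) = y

# Rolle's theorem now promises f'(c) = 0 for some c in (a, b)
fprime = lambda x: 3 * x**2 - 1
c = bisect(fprime, a, b)
assert abs(f(a) - f(b)) < 1e-9              # two points mapping to the same value
assert abs(fprime(c)) < 1e-9                # and a critical point between them
```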
Trigonometric Functions
The majority of this assignment deals with the trigonometric functions starting from their functional equation definition.
Definition 1 (Angle Identities) A pair of functions $s,c\colon\mathbb{R}\to\mathbb{R}$ are called trigonometric if they are a continuous nonconstant solution to the angle identities $$s(x+y)=s(x)c(y)+c(x)s(y),\qquad c(x+y)=c(x)c(y)-s(x)s(y).$$
Starting from these two identities, much can be proven. For those of you completing the optional final project on trigonometry, you’ll do some of that there. But for these problems it will be helpful to remember a couple rather immediate corollaries of these that we proved in the text:
Proposition 1 If $s,c$ are trigonometric functions, then $s(0)=0$, $c(0)=1$, and $s(x)^2+c(x)^2=1$ for all $x$.
In these problems, instead of algebraic identities we consider differential ones.
Exercise 4 Let $s,c$ be trigonometric functions. Prove that if they are differentiable at zero (so $s'(0)$ and $c'(0)$ exist), then $s$ and $c$ are differentiable at every $x$ on the real line.
Hint: use the angle identities, and the limit definition of the derivative, $$f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}.$$
Solution. Assume that $s,c$ are differentiable at zero, and let $x$ be arbitrary. Then computing from the limit definition, $$s'(x)=\lim_{h\to 0}\frac{s(x+h)-s(x)}{h}=\lim_{h\to 0}\frac{s(x)c(h)+c(x)s(h)-s(x)}{h}.$$ Factoring out $s(x)$ from the first term, and using that $s(0)=0$ and $c(0)=1$, we can rewrite this as $$\lim_{h\to 0}\left(s(x)\frac{c(h)-c(0)}{h}+c(x)\frac{s(h)-s(0)}{h}\right).$$ Using the fact that we know the derivatives at zero exist, we recognize the two limits appearing here! Knowing they exist allows us to apply the limit laws: $$s'(x)=s(x)c'(0)+c(x)s'(0).$$
Thus, $s'(x)$ exists, and even better, we have a formula for it! A similar calculation with the angle sum identity for cosine implies that the derivative of cosine also exists, with $c'(x)=c(x)c'(0)-s(x)s'(0)$.
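As a numerical sanity check of the formula, here is a sketch using the familiar sine and cosine as the trigonometric pair (for which, as we'll see below, $s'(0)=1$ and $c'(0)=0$): a direct difference quotient for $s'(x)$ should match $s(x)c'(0)+c(x)s'(0)$ at every point.

```python
import math

h = 1e-7
# estimate the derivatives at zero by difference quotients
sp0 = (math.sin(h) - math.sin(0)) / h   # s'(0), approximately 1
cp0 = (math.cos(h) - math.cos(0)) / h   # c'(0), approximately 0

for x in [0.5, 1.0, 2.0]:
    quotient = (math.sin(x + h) - math.sin(x)) / h    # direct difference quotient for s'(x)
    formula = math.sin(x) * cp0 + math.cos(x) * sp0   # s(x)c'(0) + c(x)s'(0)
    assert abs(quotient - formula) < 1e-5             # the two agree to numerical precision
```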
Exercise 5 Assuming that $s,c$ are differentiable trigonometric functions, prove that $c'(0)=0$ and $s'(0)\neq 0$.
Hint: we already know some things about $c$ at 0… For $s$, what would happen if $s'(0)=0$, given what you learned in the previous problem?
Proof. We know that $s(x)^2+c(x)^2=1$ for all $x$, so $c(x)^2\leq 1$: thus $|c(x)|\leq 1$ for all $x$, and so $c(x)\leq 1$ for all $x$. Since $c(0)=1$, we know this must be a maximum value, and by Fermat's theorem the derivative of a differentiable function at a local max or min is zero. Thus $c'(0)=0$.
Now we consider $s'(0)$. Assume for the sake of contradiction that $s'(0)=0$. Then using our derivative formula above, we see that for all $x$, $$s'(x)=s(x)c'(0)+c(x)s'(0)=s(x)\cdot 0+c(x)\cdot 0=0.$$
But as we've proven via the Mean Value Theorem, any function whose derivative is everywhere zero is constant, and a trigonometric pair of functions was defined to be a nonconstant solution to the angle sum identities. Thus $s$ must be nonconstant, and we have a contradiction. This means $s'(0)$ must be nonzero.
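Both claims are easy to check numerically for the standard pair (a sketch; the symmetric difference quotients are my choice for better accuracy):

```python
import math

h = 1e-6
cp0 = (math.cos(h) - math.cos(-h)) / (2 * h)   # symmetric difference quotient for c'(0)
sp0 = (math.sin(h) - math.sin(-h)) / (2 * h)   # symmetric difference quotient for s'(0)
assert abs(cp0) < 1e-9          # c'(0) = 0, as Fermat's theorem predicts
assert abs(sp0 - 1.0) < 1e-9    # s'(0) = 1: in particular, nonzero
```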
Exercise 6 Let $k=s'(0)$ be the derivative at zero. Combine what you've learned in the previous problems to show that differentiable trigonometric functions satisfy the identities $$s'(x)=k\,c(x),\qquad c'(x)=-k\,s(x).$$
Further, prove that if $s(x),c(x)$ are trigonometric functions so are $s(\lambda x)$ and $c(\lambda x)$ for $\lambda\neq 0$, and use differentiation laws to conclude that $k$ can take on any nonzero real value.
Proof. Let $s,c$ be trigonometric and $\lambda\neq 0$. Then the functions $S(x)=s(\lambda x)$ and $C(x)=c(\lambda x)$ are continuous as they are compositions of continuous functions, and are nonconstant as $s,c$ are nonconstant. Furthermore, they satisfy the trigonometric identities, as $$S(x+y)=s(\lambda x+\lambda y)=s(\lambda x)c(\lambda y)+c(\lambda x)s(\lambda y)=S(x)C(y)+C(x)S(y),$$ and similarly for $C$.
Now let $m\neq 0$ be arbitrary. We want to show that it's possible to build a trigonometric pair $S,C$ such that $S'(0)=m$. Using our original pair we know $k=s'(0)\neq 0$, so since $k\neq 0$ we can divide by it, and set $\lambda=m/k$. Consider the trigonometric functions $$S(x)=s(\lambda x),\qquad C(x)=c(\lambda x).$$
Using the chain rule, $$S'(x)=\lambda s'(\lambda x).$$ So at zero, $S'(0)=\lambda s'(0)=\frac{m}{k}\cdot k=m$, as required.
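A quick sketch of this rescaling trick in code, using the standard sine (for which $k=s'(0)=1$, so $\lambda=m/1$): the rescaled pair still satisfies the angle identity, and its derivative at zero is whatever nonzero $m$ we asked for.

```python
import math

def make_pair(m):
    """Build a trigonometric pair whose sine-part has derivative m at zero.
    For the standard sine, k = s'(0) = 1, so lam = m / k = m."""
    lam = m / 1.0
    return (lambda x: math.sin(lam * x)), (lambda x: math.cos(lam * x))

h = 1e-7
for m in [2.0, -0.5, 3.14]:
    S, C = make_pair(m)
    Sp0 = (S(h) - S(-h)) / (2 * h)        # symmetric difference quotient for S'(0)
    assert abs(Sp0 - m) < 1e-5            # the chain rule gives S'(0) = lam * s'(0) = m
    x, y = 0.3, 1.1                       # the pair still satisfies the angle identity
    assert abs(S(x + y) - (S(x) * C(y) + C(x) * S(y))) < 1e-12
```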
This tells us that, just like with exponentials, while the functional equation itself doesn't pick out a specific function (but rather a whole one-parameter family of them), calculus selects a natural choice: the one where the arbitrary constant that shows up in differentiation is unity!
Definition 2 The sine and cosine are defined by the conditions that $\sin,\cos$ are trigonometric, and that $\sin'(0)=1$: equivalently, $$\sin'(x)=\cos(x),\qquad\cos'(x)=-\sin(x).$$
Exercise 7 Use the definition above to compute the Taylor series for sine and cosine, and prove the series converge on the entire real line.
Solution. I'll give the solution for $\sin$; the cosine is analogous. The Taylor series is computed by finding the $n$th derivatives at zero: $$\sin'=\cos,\quad \sin''=-\sin,\quad \sin'''=-\cos,\quad \sin''''=\sin.$$ Thus, after four derivatives, the pattern repeats! Using the fact that we know the values of sine and cosine at zero, we see the derivative terms in the Taylor series are periodic with period 4, repeating the pattern $$0,\ 1,\ 0,\ -1,\ 0,\ 1,\ 0,\ -1,\ \dots$$ Plugging these in yields $$\sin(x)=0+x+0-\frac{x^3}{3!}+0+\frac{x^5}{5!}+\cdots$$ where the pattern repeats again. We see all the even terms are zero, and the odd terms alternate in sign, so this gives $$\sin(x)=\sum_{k\geq 0}(-1)^k\frac{x^{2k+1}}{(2k+1)!}.$$ For convergence on the entire real line, note that the ratio of successive terms is $-\frac{x^2}{(2k+2)(2k+3)}$, which tends to $0$ for every fixed $x$, so the ratio test gives convergence for all real $x$.
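The series is easy to sketch in code; here a partial-sum evaluator (built from the term-ratio recurrence above, which avoids computing large factorials directly) is compared against the library sine at several points:

```python
import math

def sin_series(x, terms):
    """Partial sum of sum_{k>=0} (-1)^k x^(2k+1) / (2k+1)!."""
    total, term = 0.0, x
    for k in range(terms):
        total += term
        # multiply by the ratio of successive terms: -x^2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

# convergence on the whole line: even for x far from 0, enough terms suffice
for x in [0.5, 1.0, 3.0, 10.0]:
    assert abs(sin_series(x, 30) - math.sin(x)) < 1e-10
```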
Our investigation here started from the most basic trigonometric identity, the sum of angles, and came up with an explicit formula for two natural functions $\sin(x)$ and $\cos(x)$. But there is still a gap in our argument! We have not proved that the series we derived actually do satisfy the angle sum identities we began with! This is where the trigonometry final project picks up: it starts with these infinite series, and proves they are in fact trigonometric! In the optional project you also prove that these functions are periodic, which gives us a rigorous definition of $\pi$ as their half period.
Theorem 1 (From the Optional Project) The functions $\sin$ and $\cos$ defined above are trigonometric, and satisfy the trig identities. They are also periodic, and $\pi$ is their half period.
From here, it's immediate using the angle-sum identity to show that the first positive zero of cosine occurs at a quarter of the period of the trigonometric functions, that is, at $\pi/2$. So we may also think of $\pi$ as being defined by the fact that $\pi/2$ is the first positive zero of $\cos$. This helps us estimate its value.
Exercise 8 The first zero of $\cos$ is $\pi/2$, so one might hope to use Newton's method to produce an approximation for $\pi$.
Find the function $N(x)$ for Newton iteration, and use a calculator to compute iterates starting with $x_0=1$. How many iterations do you need to perform to get 10 digits of accuracy?
Solution. Applying Newton's method to $f(x)=\cos(x)$, with $f'(x)=-\sin(x)$, gives the iteration function $$N(x)=x-\frac{\cos x}{-\sin x}=x+\frac{\cos x}{\sin x}.$$ From here it is just a numerical calculation, but you get there quickly: $x_3$, after just 3 iterations, already gives 10 digits of accuracy!
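Here is a sketch of that calculation (standing in for the calculator, and assuming the starting guess $x_0=1$), confirming the 10-digit claim after three steps:

```python
import math

def newton_step(x):
    # Newton's method for f(x) = cos(x): N(x) = x - cos(x)/(-sin(x)) = x + cos(x)/sin(x)
    return x + math.cos(x) / math.sin(x)

x = 1.0
for _ in range(3):
    x = newton_step(x)        # iterates converge rapidly to pi/2

pi_estimate = 2 * x           # double the zero of cosine to get pi
assert abs(pi_estimate - math.pi) < 1e-10   # 10 digits after only 3 iterations
```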
This of course is not very satisfying, as we had to use a calculator to find values of $\sin$ and $\cos$! But we know enough to approximate these values with a series expansion!
Exercise 9 How many terms of the series expansions of $\sin(x)$, $\cos(x)$ are needed to evaluate them at $x=1$ to within $10^{-10}$? (Estimate this using Taylor's Error Formula.)
Use this many terms of the series expansion to approximate the terms appearing in the first two iterations of Newton's method. What is your approximate value for $\pi$ resulting from this?
Solution. To figure out how many terms of the series we need, we use the Taylor Error Formula. This guarantees that the difference between the $N$th partial sum and the true value at $x=1$ is equal to $$\frac{f^{(N+1)}(c)}{(N+1)!}1^{N+1}=\frac{f^{(N+1)}(c)}{(N+1)!}$$ where $c\in(0,1)$. Since both sine and cosine have all of their derivatives also being $\pm\sin$ or $\pm\cos$, and we know the sine and cosine to each be bounded in magnitude by 1, this means $$|\mathrm{error}|\leq\frac{1}{(N+1)!}.$$ We want the error to be less than $10^{-10}$, so we are trying to solve $(N+1)!>10^{10}$: this is satisfied by $N=13$, as $14!=87{,}178{,}291{,}200>10^{10}$ (and this is the smallest such: $N=12$ gives $13!=6{,}227{,}020{,}800<10^{10}$). Working to this level of precision we have $$\sin(1)\approx\sum_{k=0}^{6}\frac{(-1)^k}{(2k+1)!}\approx 0.8414709848,\qquad \cos(1)\approx\sum_{k=0}^{6}\frac{(-1)^k}{(2k)!}\approx 0.5403023059.$$
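The factorial bound is a one-line check in code (a sketch verifying that $N=13$ is the smallest degree that works):

```python
from math import factorial

# the error bound 1/(N+1)! drops below 10^{-10} exactly when (N+1)! > 10^{10}
assert factorial(14) > 10**10     # 14! = 87,178,291,200: N = 13 suffices
assert factorial(13) < 10**10     # 13! =  6,227,020,800: N = 12 does not

N = min(n for n in range(1, 20) if factorial(n + 1) > 10**10)
assert N == 13                    # smallest such N
```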
From here it is just a calculation: $$x_1=1+\frac{\cos(1)}{\sin(1)}\approx 1+\frac{0.5403023059}{0.8414709848}\approx 1.6420926159,$$ and then, evaluating the series again at $x_1$, $$x_2=x_1+\frac{\cos(x_1)}{\sin(x_1)}\approx 1.5706753.$$ Thus, we get our estimate for $\pi$ by doubling: $$\pi\approx 2x_2\approx 3.1413506.$$
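Putting the whole computation together, here is a sketch that uses only the degree-13 Taylor polynomials (no library trig at all) for two Newton steps from the assumed starting guess $x_0=1$:

```python
def sin13(x):
    """Degree-13 Taylor polynomial: sum_{k=0}^{6} (-1)^k x^(2k+1)/(2k+1)!."""
    total, term = 0.0, x
    for k in range(7):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

def cos13(x):
    """Degree-12 Taylor polynomial: sum_{k=0}^{6} (-1)^k x^(2k)/(2k)!."""
    total, term = 0.0, 1.0
    for k in range(7):
        total += term
        term *= -x * x / ((2 * k + 1) * (2 * k + 2))
    return total

x = 1.0
for _ in range(2):                 # two Newton steps toward the zero of cosine
    x = x + cos13(x) / sin13(x)

pi_approx = 2 * x                  # doubling the approximate zero pi/2 gives pi
assert 3.1413 < pi_approx < 3.1414
```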