
Linearity of differentiation

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Hellacioussatyr (talk | contribs) at 00:58, 9 May 2022. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In calculus, the derivative of any linear combination of functions equals the same linear combination of the derivatives of the functions;[1] this property is known as linearity of differentiation, the rule of linearity,[2] or the superposition rule for differentiation.[3] It is a fundamental property of the derivative that encapsulates in a single rule two simpler rules of differentiation, the sum rule (the derivative of the sum of two functions is the sum of the derivatives) and the constant factor rule (the derivative of a constant multiple of a function is the same constant multiple of the derivative).[4][5] Thus it can be said that differentiation is linear, or the differential operator is a linear operator.[6]

Statement and derivation

Let f and g be functions, with α and β constants. Now consider

\[ \frac{d}{dx} \bigl( \alpha f(x) + \beta g(x) \bigr). \]

By the sum rule in differentiation, this is

\[ \frac{d}{dx} \bigl( \alpha f(x) \bigr) + \frac{d}{dx} \bigl( \beta g(x) \bigr), \]

and by the constant factor rule in differentiation, this reduces to

\[ \alpha f'(x) + \beta g'(x). \]

Therefore,

\[ \frac{d}{dx} \bigl( \alpha f(x) + \beta g(x) \bigr) = \alpha f'(x) + \beta g'(x). \]

Omitting the brackets, this is often written as:

\[ (\alpha f + \beta g)' = \alpha f' + \beta g'. \]
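The rule can be spot-checked numerically. The sketch below assumes nothing beyond the Python standard library; `approx_derivative` is an illustrative helper (a central difference quotient), not part of any standard API, and the functions, constants, and evaluation point are arbitrary choices.

```python
import math

# Illustrative helper: a central difference quotient used to
# approximate a derivative numerically.
def approx_derivative(func, x, h=1e-6):
    return (func(x + h) - func(x - h)) / (2 * h)

alpha, beta = 3.0, -2.0      # the constants from the statement above
f, g = math.sin, math.exp    # two sample differentiable functions
x = 0.7

# Left-hand side: differentiate the linear combination alpha*f + beta*g.
lhs = approx_derivative(lambda t: alpha * f(t) + beta * g(t), x)

# Right-hand side: take the same linear combination of the exact
# derivatives (sin' = cos, exp' = exp).
rhs = alpha * math.cos(x) + beta * math.exp(x)

print(lhs, rhs)  # the two values agree up to discretization error
```

Swapping in other differentiable functions or constants leaves the agreement intact, which is exactly what linearity asserts.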

Detailed proofs/derivations from the definition

We can prove the entire linearity principle at once, or we can prove the individual steps (the constant factor rule and the sum rule) separately. Here, both approaches will be shown.

If we first prove linearity directly, we will thereby have proven the constant coefficient/factor rule, the sum rule, and the difference rule. For the sum rule, set both constant coefficients to 1; when the statement is simplified, it becomes the derivative of the sum of the two functions. For the difference rule, set the first constant coefficient to 1 and the second constant coefficient to −1; when simplified, this gives the difference of the two functions. For the constant coefficient rule, set either the second constant coefficient to 0, or the second function itself to 0; when simplified, the result is just the first function multiplied by a constant coefficient. (Note: from a technical standpoint, the domain of the second function must be taken into consideration. For simplicity, one could let the second function equal the first function and set the second constant coefficient to 0, as this avoids any problems involving domains and undefined values. One could also define both the second constant coefficient and the second function to be 0, where the domain of the second function is a superset of that of the first; there are many possibilities.)
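Written out, these specializations of the linearity statement are (with a and b the two constant coefficients, as used in the proofs below):

```latex
\begin{aligned}
a = b = 1 :\quad & (f + g)' = f' + g' && \text{(sum rule)} \\
a = 1,\ b = -1 :\quad & (f - g)' = f' - g' && \text{(difference rule)} \\
b = 0 \ (\text{or } g = 0) :\quad & (a f)' = a\, f' && \text{(constant factor rule)}
\end{aligned}
```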

Conversely, if we first prove the constant coefficient law and the sum law, we can prove linearity and the difference law. We can do this by defining the first and second functions as two other functions multiplied by constant coefficients. Then, as shown in the derivation of the previous section, we first use the sum law while differentiating, and then use the constant coefficient law, which reaches our conclusion for linearity. To prove the difference law, we only have to redefine the second function as another function multiplied by the constant coefficient −1; when simplified, this gives us the difference law for differentiation.

(In the proofs/derivations below,[7][8] the coefficients a and b are used; they correspond to the coefficients α and β above.)

Linearity (directly)

Let a and b be real numbers. Let f and g be functions. Let j be a function, where j is defined only where f and g are both defined. (In other words, the domain of j is the intersection of the domains of f and g.) Let x be in the domain of j. Let j(x) = af(x) + bg(x).

We want to prove that j'(x) = af'(x) + bg'(x).

By definition, we can see that:

\[ \begin{aligned} j'(x) &= \lim_{h \to 0} \frac{j(x+h) - j(x)}{h} \\ &= \lim_{h \to 0} \frac{\bigl( a f(x+h) + b g(x+h) \bigr) - \bigl( a f(x) + b g(x) \bigr)}{h} \\ &= \lim_{h \to 0} \frac{a \bigl( f(x+h) - f(x) \bigr) + b \bigl( g(x+h) - g(x) \bigr)}{h} \\ &= \lim_{h \to 0} \left( a \, \frac{f(x+h) - f(x)}{h} + b \, \frac{g(x+h) - g(x)}{h} \right) \end{aligned} \]

In order to use the limit laws (the sum of limits law), we need to know that \( \lim_{h \to 0} a \, \frac{f(x+h) - f(x)}{h} \) and \( \lim_{h \to 0} b \, \frac{g(x+h) - g(x)}{h} \) both individually exist.

Now, for these smaller limits, to use the limit laws (the constant coefficient law), we need to know that \( \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \) and \( \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} \) both individually exist.

Now, notice that \( f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \) and \( g'(x) = \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} \). So, if we know that f'(x) and g'(x) both exist, we will know that the two limits above both individually exist. Now that we know this, we can use the constant coefficient rule for limits to see the following:

\[ \lim_{h \to 0} a \, \frac{f(x+h) - f(x)}{h} = a \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = a f'(x) \]

and

\[ \lim_{h \to 0} b \, \frac{g(x+h) - g(x)}{h} = b \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = b g'(x). \]

Now with this, we can go back and apply the limit law for the sum of limits, since we know that the two limits both individually exist. From here, we can return directly to the derivative we were working on:

\[ j'(x) = \lim_{h \to 0} \left( a \, \frac{f(x+h) - f(x)}{h} + b \, \frac{g(x+h) - g(x)}{h} \right) = a f'(x) + b g'(x). \]

Finally, we have shown what we claimed in the beginning: j'(x) = af'(x) + bg'(x).
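The key algebraic step of this proof, splitting the difference quotient of j into a combination of the difference quotients of f and g, holds for every nonzero h, not just in the limit. A minimal sketch in standard-library Python (the functions, coefficients, and evaluation point are arbitrary choices for illustration):

```python
import math

# Arbitrary differentiable sample functions and coefficients.
f = lambda t: t ** 3
g = lambda t: 1.0 / (1.0 + t * t)
a, b = 2.0, 5.0
j = lambda t: a * f(t) + b * g(t)

x = 1.3
for h in (0.5, 0.01, 1e-4):  # any nonzero step size works; no limit is taken
    quotient_j = (j(x + h) - j(x)) / h
    combined = a * (f(x + h) - f(x)) / h + b * (g(x + h) - g(x)) / h
    # The two expressions are algebraically identical, so they agree up to
    # floating-point rounding for every h, even before h tends to 0.
    assert math.isclose(quotient_j, combined, rel_tol=1e-8)
print("difference quotient splits exactly for every h")
```

Only after this exact rewriting do the limit laws (and hence the existence of f'(x) and g'(x)) enter the argument.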

Sum

Let f and g be functions. Let j be a function, where j is defined only where f and g are both defined. (In other words, the domain of j is the intersection of the domains of f and g.) Let x be in the domain of j. Let j(x) = f(x) + g(x).

We want to prove that j'(x) = f'(x) + g'(x).

By definition, we can see that:

\[ \begin{aligned} j'(x) &= \lim_{h \to 0} \frac{j(x+h) - j(x)}{h} \\ &= \lim_{h \to 0} \frac{\bigl( f(x+h) + g(x+h) \bigr) - \bigl( f(x) + g(x) \bigr)}{h} \\ &= \lim_{h \to 0} \left( \frac{f(x+h) - f(x)}{h} + \frac{g(x+h) - g(x)}{h} \right) \end{aligned} \]

In order to use the limit laws (the sum of limits law) here, we need to show that the individual limits \( \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \) and \( \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} \) both exist on their own.

Notice that \( f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \) and \( g'(x) = \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} \).

That means that, when the derivatives f'(x) and g'(x) exist, the limits exist (because the derivatives equal their respective limits). So, assuming that the derivatives exist, we can proceed, as that means the limits exist.

Continuing from what we had above:

\[ j'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} + \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = f'(x) + g'(x). \]

Thus, we have shown what we wanted to show, that j'(x) = f'(x) + g'(x).
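The limit in the sum rule can be illustrated numerically: as h shrinks, the difference quotient of f + g approaches f'(x) + g'(x). A standard-library Python sketch, with sample functions chosen so the exact derivatives are known:

```python
import math

# Forward difference quotient straight from the definition of the derivative.
def diff_quotient(func, x, h):
    return (func(x + h) - func(x)) / h

f = lambda t: t * t          # f'(x) = 2x
g = math.cos                 # g'(x) = -sin(x)
x = 0.5
exact = 2 * x - math.sin(x)  # f'(x) + g'(x)

# The quotient of (f + g) converges to f'(x) + g'(x) as h tends to 0.
for h in (1e-2, 1e-4, 1e-6):
    approx = diff_quotient(lambda t: f(t) + g(t), x, h)
    print(h, approx, abs(approx - exact))
```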

Difference

Let f and g be functions. Let j be a function, where j is defined only where f and g are both defined. (In other words, the domain of j is the intersection of the domains of f and g.) Let x be in the domain of j. Let j(x) = f(x) - g(x).

We want to prove that j'(x) = f'(x) - g'(x).

By definition, we can see that:

\[ \begin{aligned} j'(x) &= \lim_{h \to 0} \frac{j(x+h) - j(x)}{h} \\ &= \lim_{h \to 0} \frac{\bigl( f(x+h) - g(x+h) \bigr) - \bigl( f(x) - g(x) \bigr)}{h} \\ &= \lim_{h \to 0} \left( \frac{f(x+h) - f(x)}{h} - \frac{g(x+h) - g(x)}{h} \right) \end{aligned} \]

In order to use the limit laws (the difference of limits law) here, we need to show that the individual limits \( \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \) and \( \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} \) both exist on their own.

Notice that \( f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \) and \( g'(x) = \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} \).

That means that, when the derivatives f'(x) and g'(x) exist, the limits exist (because the derivatives equal their respective limits). So, assuming that the derivatives exist, we can proceed, as that means the limits exist.

Continuing from what we had above:

\[ j'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} - \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = f'(x) - g'(x). \]

Thus, we have shown what we wanted to show, that j'(x) = f'(x) - g'(x).

Constant coefficient

Let f be a function. Let k be a real number; k will be the constant coefficient. Let j be a function, where j is defined only where f is defined. (In other words, the domain of j is equal to the domain of f.) Let x be in the domain of j. Let j(x) = kf(x).

We want to prove that j'(x) = kf'(x).

By definition, we can see that:

\[ j'(x) = \lim_{h \to 0} \frac{j(x+h) - j(x)}{h} = \lim_{h \to 0} \frac{k f(x+h) - k f(x)}{h} = \lim_{h \to 0} k \, \frac{f(x+h) - f(x)}{h} \]

Now, in order to use a limit law (the constant coefficient law) to show that

\[ \lim_{h \to 0} k \, \frac{f(x+h) - f(x)}{h} = k \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}, \]

we will need to show that \( \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \) exists. However, notice that \( f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \), by the definition of the derivative. So, if f'(x) exists, the limit exists, because they are equal.

Thus, if we assume that f'(x) exists, we can use the limit law and continue our proof:

\[ j'(x) = \lim_{h \to 0} k \, \frac{f(x+h) - f(x)}{h} = k \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = k f'(x). \]

Thus, we have proven what we wanted to prove, and, as we desired, have shown that when j(x) = kf(x), it means that j'(x) = kf'(x).
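The constant coefficient rule can likewise be checked numerically: the difference quotient of j is exactly k times the quotient of f, so its limit is k · f'(x). A short standard-library Python sketch, with k, f, and x chosen arbitrarily so the exact derivative is known:

```python
import math

k = 5.0        # the constant coefficient
f = math.exp   # f'(x) = exp(x)
x = 1.0
j = lambda t: k * f(t)

# The difference quotient of j is exactly k times the quotient of f,
# so for small h it approximates k * f'(x) = k * exp(x).
h = 1e-7
quotient_j = (j(x + h) - j(x)) / h
print(quotient_j, k * math.exp(x))
```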

References

  1. ^ Blank, Brian E.; Krantz, Steven George (2006), Calculus: Single Variable, Volume 1, Springer, p. 177, ISBN 9781931914598.
  2. ^ Strang, Gilbert (1991), Calculus, Volume 1, SIAM, pp. 71–72, ISBN 9780961408824.
  3. ^ Stroyan, K. D. (2014), Calculus Using Mathematica, Academic Press, p. 89, ISBN 9781483267975.
  4. ^ Estep, Donald (2002), "20.1 Linear Combinations of Functions", Practical Analysis in One Variable, Undergraduate Texts in Mathematics, Springer, pp. 259–260, ISBN 9780387954844.
  5. ^ Zorn, Paul (2010), Understanding Real Analysis, CRC Press, p. 184, ISBN 9781439894323.
  6. ^ Gockenbach, Mark S. (2011), Finite-Dimensional Linear Algebra, Discrete Mathematics and Its Applications, CRC Press, p. 103, ISBN 9781439815649.
  7. ^ "Differentiation Rules". CEMC's Open Courseware. Retrieved 3 May 2022.
  8. ^ Dawkins, Paul. "Proof Of Various Derivative Properties". Paul's Online Notes. Retrieved 3 May 2022.