Logarithm

In mathematics, the logarithm of a number to a given base is the exponent to which the base must be raised in order to produce that number. For example, the logarithm of 1000 to base 10 is 3, because 10 to the power of 3 is 1000: 10^3 = 1000. The logarithm of x to the base b is written logb(x); for example, log10(1000) = 3.
By the following formulas, logarithms reduce products to sums and powers to products:
Three values for the base are used most often. The logarithm with base b = 10 is called the common logarithm; its primary use was for calculations before calculators could handle multiplication, division, powers, and roots effectively. The natural logarithm, with base b = e, occurs in calculus and is the inverse of the exponential function. The binary logarithm with base b = 2 has applications in computing.
Logarithms have a number of generalizations. The complex logarithm, the inverse of the exponential function applied to complex numbers, generalizes the logarithm to complex numbers. The discrete logarithm generalizes it to cyclic groups and has applications in public-key cryptography.
John Napier invented logarithms in the early 17th century, and since then they have been used for many applications in mathematics and science. Logarithm tables were used extensively to perform calculations, until replaced in the latter half of the 20th century by electronic calculators and computers. Logarithmic scales reduce wide-ranging quantities to smaller scopes; for example the Richter scale. They also form the mathematical backbone of musical intervals and some models in psychophysics, and have been used in forensic accounting. In addition to being a standard function used in various scientific formulas, logarithms are used in determining the complexity of algorithms and of fractals, and in prime-counting functions.
Logarithm of positive real numbers
The logarithm of a number y with respect to a number b is the power to which b has to be raised in order to give y. The number b is called the base. In symbols, the logarithm is the number x solving the following equation:[1]
The logarithm is denoted as
The logarithm logb(y) is defined for any positive number y and any positive base b which is unequal to 1. These restrictions are explained below. For b = 2 and y = 8, for example, this means
since 2^3 = 2 · 2 · 2 = 8. Another example is
since
The first equality is because a^−1 is the reciprocal of a, 1/a, for any number a unequal to zero.[c] A third example: log10(150) is approximately 2.176. Indeed, 10^2 = 100 and 10^3 = 1000. As 150 lies between 100 and 1000, its logarithm lies between 2 and 3.
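These examples can be checked with Python's standard math module; a small illustrative sketch (the variable names are ours):

```python
import math

# log_b(y) asks: to what power must b be raised to give y?
log2_8 = math.log2(8)        # 2^3 = 8, so this is 3
log10_150 = math.log10(150)  # 100 < 150 < 1000, so this lies between 2 and 3

assert math.isclose(log2_8, 3.0)
assert 2 < log10_150 < 3
print(round(log10_150, 3))   # 2.176
```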
Logarithmic identities
There are several important formulas, sometimes called logarithmic identities, relating various logarithms to one another.[2]
Logarithm of product, quotient, power and root
The logarithm of a product is the sum of the two logarithms:
The logarithm of a division is the difference of the two logarithms:
The logarithm of the p-th power of a number is p times the logarithm of that number:
The logarithm of the p-th root of a number is the logarithm of that number divided by p:
Examples:
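The four identities above can also be verified numerically; a minimal Python sketch with arbitrarily chosen values:

```python
import math

b, x, y, p = 10.0, 6.0, 4.0, 3.0
log = lambda v: math.log(v, b)  # logarithm to base b

assert math.isclose(log(x * y), log(x) + log(y))    # product -> sum
assert math.isclose(log(x / y), log(x) - log(y))    # quotient -> difference
assert math.isclose(log(x ** p), p * log(x))        # power -> multiple
assert math.isclose(log(x ** (1 / p)), log(x) / p)  # root -> quotient
```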
Change of base
The following formula relates logb(x), the logarithm of a number x to base b, to the logarithms of x and b with respect to an arbitrary base k:
Typical handheld calculators calculate the logarithms to bases k = 10 or k = e. Logarithms with respect to any base b can be determined using only base-k-logarithms by the following special case of the previous formula:
The following fact is also a consequence of base change: given a number x and its logarithm logb(x) to an unknown base b, b is given by the following formula:
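Both formulas can be illustrated in Python (the base and argument are arbitrary choices):

```python
import math

x, b = 100.0, 5.0

# change of base: log_b(x) = log_k(x) / log_k(b), here with k = e
log_b_x = math.log(x) / math.log(b)
assert math.isclose(log_b_x, math.log(x, b))

# recovering an unknown base b from x and log_b(x): b = x**(1 / log_b(x))
assert math.isclose(x ** (1 / log_b_x), b)
```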
Particular bases
Among all possible bases b, that is to say positive real numbers unequal to 1, a few particular choices for b are more commonly used. These are b = 10, b = e (the mathematical constant ≈ 2.71828), and b = 2. For example, base-10 logarithms are easy to use for manual calculations in the decimal number system: log10(10 · x) = 1 + log10(x).
On the other hand, mathematical analysis prefers the base b = e because of the analytical properties explained below. The following table lists common notations for logarithms to these bases and the fields where they are used. Many disciplines commonly write log(x) instead of logb(x), when the intended base can be determined from the context. The notations suggested by the International Organization for Standardization (ISO 31-11) are underlined in the table below.[4]
Base b | Name for logb(x) | Notations for logb(x) | Used in |
---|---|---|---|
2 | binary logarithm | lb(x),[5] ld(x), log(x) (in computer science), lg(x) | computer science, information theory |
e | natural logarithm | ln(x),[a] log(x) (in mathematics and many programming languages[e]) | mathematical analysis, physics, chemistry, statistics, economics and some engineering fields |
10 | common logarithm | lg(x), log(x) (in engineering, biology, astronomy) | various engineering fields (see decibel and see below), logarithm tables, handheld calculators |
Analytic properties
A deeper study of logarithms requires the concept of function. In a nutshell, a function is a rule that assigns to a given number another number. For example, associating to any real number x the x-th power of b, b^x, is a function f. This is written as
Here the base b is considered to be fixed, so the expression b^x only depends on x.
Logarithm as a function

To justify the above definition of logarithms, it is necessary to show that the equation
actually has a solution x and that this solution is unique, provided that y is positive and that b is positive and unequal to 1. A proof of that fact requires some elementary calculus, specifically the intermediate value theorem.[6] This theorem says that a continuous function that takes two values m and n also takes any value that lies between m and n. A function is continuous if it does not "jump", that is, if its graph can be drawn without lifting the pen. This property can be shown to hold for the function f(x) = b^x. Moreover, f takes arbitrarily big and arbitrarily small positive values, so that any number y > 0 lies between f(x0) and f(x1) for suitable x0 and x1. Thus, the intermediate value theorem ensures that the equation f(x) = y has a solution. Moreover, as the function f is strictly increasing (for b > 1), or strictly decreasing (for 0 < b < 1), there is only one solution to this equation.
The logarithm function or logarithmic function (or even just logarithm) assigns to any positive real number y its base-b-logarithm logb(y). A compact way of rephrasing the point that logb(y) is the solution x to the equation f(x) = y is to say that the logarithm function is the inverse function of the function f. Inverse functions are closely related to the original functions: their graphs correspond to each other upon reflecting them at the diagonal line x = y, as shown at the right: a point (t, u = b^t) on the graph of f yields a point (u, t = logb(u)) on the graph of the logarithm and vice versa. Moreover, analytic properties of the function pass to its inverse function.[6] Thus, as f(x) = b^x is a continuous and differentiable function, so is its inverse function, logb(y). Roughly speaking, a differentiable function is one whose graph has no sharp "corners".
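The argument above is constructive: monotonicity and continuity let one locate logb(y) by repeated halving of a bracketing interval. A bisection sketch for b > 1 (the function and variable names are ours):

```python
import math

def log_by_bisection(b, y, tol=1e-12):
    """Solve b**x = y for x, assuming b > 1 and y > 0."""
    lo, hi = -1.0, 1.0
    while b ** lo > y:    # widen downwards until f(lo) <= y
        lo *= 2
    while b ** hi < y:    # widen upwards until f(hi) >= y
        hi *= 2
    while hi - lo > tol:  # halve the bracket, as in the IVT argument
        mid = (lo + hi) / 2
        if b ** mid < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

assert abs(log_by_bisection(2, 8) - 3) < 1e-9
assert abs(log_by_bisection(10, 150) - math.log10(150)) < 1e-9
```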
Derivative and antiderivative

Using that the natural logarithm ln(x) = loge(x) is the inverse function of e^x, the chain rule implies that the derivative of ln(x) is given by
This implies that the antiderivative of 1/x is ln(x) + C. An early application of this fact was the quadrature of a hyperbolic sector by de Saint-Vincent in 1647, as shown at the right. The derivative with a generalised functional argument f(x) is
For this reason the quotient at the right hand side is called logarithmic derivative of f. The antiderivative of the natural logarithm ln(x) is
Derivatives and antiderivatives of logarithms to other bases can be derived therefrom using the formula for change of bases.
Integral representation of the natural logarithm

The natural logarithm of t agrees with the integral of 1/x dx from 1 to t:
That is to say, ln(t) equals the area between the x-axis and the function 1/x, ranging from x = 1 to x = t. This is depicted at the right. The formula is a consequence of the fundamental theorem of calculus and the above formula for the derivative of ln(x). Some authors actually use the right hand side of this equation as a definition of the natural logarithm and derive the formulas concerning logarithms of products and powers mentioned above from this definition.[8] The product formula ln(tu) = ln(t) + ln(u) is deduced in the following way:
The equality (1) used a splitting of the integral into two parts, the equality (2) is a change of variable (w = x/t). Geometrically, the splitting (1) corresponds to dividing the area into the yellow and blue parts shown. Rescaling the left hand blue area in vertical direction by the factor t and shrinking it by the same factor in the horizontal direction does not change its size. Moving it appropriately, the area fits the graph of the function f(x) = 1/x again. Therefore, the left hand area, which is the integral of f(x) from t to tu, is the same as the integral from 1 to u. This justifies the equality (2).
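The area interpretation can be checked numerically; a midpoint-rule sketch (the step count is an arbitrary choice for rough double-precision accuracy):

```python
import math

def ln_by_area(t, steps=1_000_000):
    """Approximate the area under 1/x from 1 to t with the midpoint rule."""
    h = (t - 1) / steps
    return sum(h / (1 + (i + 0.5) * h) for i in range(steps))

assert abs(ln_by_area(2.0) - math.log(2.0)) < 1e-8
# the product formula: ln(2 * 3) = ln(2) + ln(3)
assert abs(ln_by_area(6.0) - (math.log(2.0) + math.log(3.0))) < 1e-7
```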

The power formula ln(tr) = r ln(t) is derived similarly:
The second equality uses a change of variables, w := x^(1/r), while the third equality follows from integration by substitution.
The sum over the reciprocals of natural numbers, the so-called harmonic series
is also closely tied to the natural logarithm: as n tends to infinity, the difference
converges (i.e., gets arbitrarily close) to a number known as Euler–Mascheroni constant. Little is known about it—not even whether it is a rational number or not. This relation is used in the performance analysis of algorithms such as quicksort.[9]
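A numerical sketch of this convergence (the constant is ≈ 0.5772):

```python
import math

def gamma_estimate(n):
    """H_n - ln(n): approaches the Euler-Mascheroni constant as n grows."""
    harmonic = sum(1 / k for k in range(1, n + 1))
    return harmonic - math.log(n)

est = gamma_estimate(1_000_000)
assert abs(est - 0.5772156649) < 1e-5   # the difference shrinks like 1/(2n)
```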
Calculation
There are a variety of ways to calculate logarithms. One method uses power series, that is to say sequences of polynomials whose values get arbitrarily close to the exact value of the logarithm. For high precision calculation of the natural logarithm, an approximation formula based on the arithmetic-geometric mean is used. Algorithms involving lookups of precalculated tables are used when emphasis is on speed rather than high accuracy. Moreover, the binary logarithm algorithm calculates lb(x) recursively based on repeated squarings of x, taking advantage of the relation
Finally, by means of quick ways to calculate the exponential function, the natural logarithm of x can also efficiently be calculated using Newton's method.[10]
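As a sketch of the last remark: applying Newton's method to f(y) = exp(y) − x yields a simple iteration for ln(x). The fixed iteration count here is our own crude safety margin, not part of any standard algorithm:

```python
import math

def ln_newton(x, iterations=60):
    """Solve exp(y) = x by Newton's method: y <- y - 1 + x * exp(-y)."""
    y = 0.0
    for _ in range(iterations):
        y = y - 1.0 + x * math.exp(-y)
    return y

assert abs(ln_newton(10.0) - math.log(10.0)) < 1e-12
assert abs(ln_newton(0.5) - math.log(0.5)) < 1e-12
```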
Logarithms can be easy to compute in some cases, such as log10(10,000) = 4. However, they generally take less simple values: by the Gelfond–Schneider theorem, given two algebraic numbers a and b, the ratio ln(a) / ln(b) is either a rational number p / q (in which case a^q = b^p) or transcendental.[11] (Algebraic numbers are a certain generalization of rational numbers; for example, the square root of 2 is algebraic. Complex numbers that are not algebraic are called transcendental; for example, π and e are such numbers. Almost all complex numbers are transcendental.) Related questions in transcendence theory such as linear forms in logarithms are a matter of current research.
Taylor series

For real numbers z satisfying 0 < z < 2,[b] the natural logarithm of z can be written as
The infinite sum means that the logarithm of z is approximated by the sums
to arbitrary precision, provided n is big enough. In the parlance of elementary calculus, ln(z) is the limit of the sequence of these sums. This series is the Taylor series expansion of the natural logarithm at z = 1.
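A sketch of these partial sums (the term counts are chosen so the neglected tail is tiny):

```python
import math

def ln_taylor(z, terms):
    """Partial sum of ln(z) = (z-1) - (z-1)^2/2 + (z-1)^3/3 - ..., for 0 < z < 2."""
    return sum((-1) ** (n + 1) * (z - 1) ** n / n for n in range(1, terms + 1))

assert abs(ln_taylor(1.5, 50) - math.log(1.5)) < 1e-10
assert abs(ln_taylor(0.5, 60) - math.log(0.5)) < 1e-10
```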
More efficient series
Another series is
for complex numbers z with positive real part.[12] This series can be derived from the above Taylor series. It converges more quickly than the Taylor series, especially if z is close to 1. For example, for z = 1.5, the first three terms of the second series approximate ln(1.5) with an error of about 3·10^−6. The above Taylor series needs 13 terms to achieve that precision. The quick convergence for z close to 1 can be taken advantage of in the following way: given a low-accuracy approximation y ≈ ln(z) and putting A = z/exp(y), the logarithm of z is
The better the initial approximation y is, the closer A is to 1, so its logarithm can be calculated efficiently. The calculation of A can be done using the exponential series, which converges quickly provided y is not too large. Calculating the logarithm of larger z can be reduced to smaller values of z by writing z = a · 10b, so that ln(z) = ln(a) + b · ln(10).
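A sketch of the faster series, with w = (z − 1)/(z + 1):

```python
import math

def ln_atanh_series(z, terms=3):
    """ln(z) = 2 * sum over k >= 0 of w**(2k+1) / (2k+1), with w = (z-1)/(z+1)."""
    w = (z - 1) / (z + 1)
    return 2 * sum(w ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# three terms already reach roughly the accuracy quoted above for z = 1.5
err = abs(ln_atanh_series(1.5, terms=3) - math.log(1.5))
assert err < 1e-5
```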
Arithmetic-geometric mean approximation
The natural logarithm ln(x) is approximated to a precision of 2^−p (or p precise bits) by the following formula due to Gauss:
Here M denotes the arithmetic-geometric mean and m is chosen so that x · 2^m is bigger than 2^(p/2). Both the arithmetic-geometric mean and the constants π and ln(2) can be calculated with quickly converging series.
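A double-precision sketch of the formula ln(x) ≈ π/(2 M(1, 4/s)) − m ln(2) with s = x · 2^m. The threshold 2^32 and the fixed AGM iteration count are our own choices, and π and ln(2) are simply taken from the standard library (the text notes they can themselves be obtained from quickly converging series):

```python
import math

def agm(a, b, iterations=40):
    """Arithmetic-geometric mean M(a, b)."""
    for _ in range(iterations):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ln_agm(x):
    """ln(x) ~ pi / (2 * M(1, 4/s)) - m * ln(2), with s = x * 2**m large."""
    m = 0
    while x * 2 ** m < 2 ** 32:
        m += 1
    s = x * 2 ** m
    return math.pi / (2 * agm(1.0, 4.0 / s)) - m * math.log(2)

assert abs(ln_agm(10.0) - math.log(10.0)) < 1e-8
assert abs(ln_agm(0.1) - math.log(0.1)) < 1e-8
```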
Complex logarithm

The complex logarithm is a generalization of the above definition of logarithms of positive real numbers to complex numbers. Complex numbers are commonly represented as z = x + iy, where x and y are real numbers and i is the imaginary unit. Given any real number z > 0, the equation
has exactly one real solution a. However, there are infinitely many complex numbers a solving this equation, i.e., multiple complex numbers a whose exponential equals z. This causes complex logarithms to be different from real ones.
The solutions of the above equation are most readily described using the polar form of z. It encodes a complex number z by its absolute value, that is, the distance r to the origin, and the argument φ, the angle between the line connecting the origin and z and the x-axis. In terms of the trigonometric functions sine and cosine, r and φ are such that[15]

The absolute value r is uniquely determined by z by the formula
but there are multiple numbers φ such that the preceding equation holds—given one such φ, then φ' = φ + 2π also satisfies the preceding equation. Adding 2π or 360 degrees[d] to the argument corresponds to "winding" around the circle counter-clock-wise by an angle of 2π. However, there is exactly one argument φ satisfying −π < φ ≤ π. It is called the principal argument and denoted Arg(z), with a capital A.[16] (The normalization 0 ≤ Arg(z) < 2π also appears in the literature.[17]) It is a fact proven in complex analysis that
Consequently, if φ is the principal argument Arg(z), the number
is such that the a-th power of e equals z, for any integer n. Accordingly, a is called the complex logarithm of z. If n = 0, a is called the principal value of the logarithm, denoted Log(z). The principal argument of any positive real number is 0; hence the principal logarithm of such a number is a real number and equals the real (natural) logarithm. In contrast to the real case, the analogous formulas for principal values of logarithms of products and powers do not, in general, hold for complex numbers.
The graph at the right depicts Log(z). The discontinuity, that is, the jump in the hue at the negative part of the x-axis, is due to the jump of the principal argument at this locus. This behavior can only be circumvented by dropping the range restriction on φ. Then the argument of z and, consequently, its logarithm become multi-valued functions.
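Python's cmath module implements the principal value; a sketch of the multi-valued behaviour:

```python
import cmath
import math

z = -1 + 0j
principal = cmath.log(z)   # Log(z) = ln|z| + i*Arg(z)
assert cmath.isclose(principal, complex(0, math.pi))

# adding integer multiples of 2*pi*i gives the other logarithms of z
for n in (-2, -1, 1, 2):
    a = principal + 2j * math.pi * n
    assert cmath.isclose(cmath.exp(a), z)
```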
Uses and occurrences

Logarithms have many applications, both within and outside mathematics. The logarithmic spiral, for example, appears (approximately) in various guises in nature, such as the shells of nautilus.[18] Logarithms also occur in various scientific formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation.
The common logarithm log10(x) is linked to the number n of numerical digits of x: n is the smallest integer strictly bigger than log10(x), i.e., the one satisfying the inequalities:
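A quick Python check of this relation (avoiding exact powers of ten, where the floating-point rounding of log10 would need extra care):

```python
import math

for x in (7, 42, 999, 12345, 1000001):
    digits = math.floor(math.log10(x)) + 1   # smallest integer > log10(x) here
    assert digits == len(str(x))
```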
Logarithmic scale

Various scientific quantities are expressed as logarithms of other quantities, a concept known as logarithmic scale. For example, the Richter scale measures the strength of an earthquake by taking the common logarithm of the energy emitted at the earthquake. Thus, an increase in energy by a factor of 10 adds one to the Richter magnitude; a 100-fold energy results in +2 in the magnitude etc.[19] This way, large-scaled quantities are reduced to much smaller ranges. A second example is the pH in chemistry: it is the negative of the base-10 logarithm of the activity of hydronium ions (H3O+, the form H+ takes in water).[20] The activity of hydronium ions in neutral water is 10^−7 mol·L^−1 at 25 °C, hence a pH of 7. On the other hand, vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 10^4 of the activity, that is, vinegar's hydronium ion activity is about 10^−3 mol·L^−1. In a similar vein, the decibel is a unit of measure which is the base-10 logarithm of ratios. For example it is used to quantify the loss of voltage levels in transmitting electrical signals,[21] to describe power levels of sounds in acoustics,[22] or the absorbance of light in the fields of spectrometry and optics. The apparent magnitude also measures the brightness of stars logarithmically.[23]
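Two of these scales as one-line formulas (the helper names are our own, chosen for illustration):

```python
import math

def pH(hydronium_activity):
    """pH: negative base-10 logarithm of the hydronium-ion activity."""
    return -math.log10(hydronium_activity)

assert math.isclose(pH(1e-7), 7.0)   # neutral water
assert math.isclose(pH(1e-3), 3.0)   # roughly vinegar

def decibels(power_ratio):
    """Decibel value of a power ratio."""
    return 10 * math.log10(power_ratio)

assert math.isclose(decibels(100.0), 20.0)   # a 100-fold ratio is +20 dB
```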
Semilog graphs or log-linear graphs use this concept for visualization: one axis, typically the vertical one, is scaled logarithmically. This way, exponential functions of the form f(x) = a · b^x appear as straight lines with slope proportional to the logarithm of b. In a similar vein, log-log graphs scale both axes logarithmically.[24]
Psychology
In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation.[25] (However, it is often considered to be less accurate than Stevens' power law.[26]) According to that model, the smallest noticeable change ΔS of some stimulus S is proportional to S. This gives rise to logarithms by the above relation of the natural logarithm and the integral over dS / S. Hick's law proposes a logarithmic relation between the time individuals take for choosing an alternative and the number of choices they have.[27]
Mathematically untrained individuals tend to estimate numerals with a logarithmic spacing, i.e., the position of a presented numeral correlates with the logarithm of the given number so that smaller numbers are given more space than bigger ones. With increasing mathematical training this logarithmic representation is gradually superseded by a linear one. This finding has been confirmed both in comparing second to sixth grade Western school children,[28] as well as in comparison between American and indigenous cultures.[29]
Probability theory and statistics

Logarithms also arise in probability theory: tossing a coin repeatedly, it is known (by the law of large numbers) that the heads-to-tails ratio approaches 0.5 as the number of tosses increases. The fluctuations of this ratio around the limiting value are quantified by the law of the iterated logarithm.
Logarithms are used in the process of maximum likelihood estimation when applied to a sample consisting of independent random variables: maximizing the product of the random variables is equivalent to maximizing the logarithm of the product, and in so doing one differentiates a sum rather than a product.[30]
Benford's law, an empirical statistical description of the occurrence of digits in certain real-life data sources, such as heights of buildings, is based on logarithms: the probability that the first decimal digit of an item in the data sample is d (from 1 to 9) equals log10(d + 1) − log10(d), irrespective of the unit of measurement.[31] Thus, according to that law, about 30% of the data can be expected to have 1 as first digit, 18% start with 2 etc. Deviations from this pattern can be used to detect fraud in accounting.[32]
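Benford's distribution in three lines:

```python
import math

# P(first digit = d) = log10(d + 1) - log10(d)
benford = {d: math.log10(d + 1) - math.log10(d) for d in range(1, 10)}

assert math.isclose(sum(benford.values()), 1.0)  # the nine probabilities sum to 1
assert round(benford[1], 2) == 0.30              # ~30% start with digit 1
assert round(benford[2], 2) == 0.18              # ~18% start with digit 2
```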
A log-normal distribution is one whose logarithm is normally distributed.[33]
Complexity
Complexity theory is a branch of computer science studying the performance of algorithms.[34] Here, logarithms are prone to occur in describing algorithms which divide a problem into two smaller ones, and join the solution of the subproblems.[35] For example, to find a number in a sorted list, the binary search algorithm checks the middle entry and proceeds with the half before or after the middle entry, depending on where the sought number must lie, if it is not the middle entry itself. This algorithm usually needs about log2(N) comparisons, where N is the length of the list. Similarly, the merge sort algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Such sort algorithms typically require a time approximately proportional to N · log(N).
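A counting sketch of binary search that makes the log2(N) behaviour visible:

```python
import math

def binary_search(items, target):
    """Return (index, comparisons); items must be sorted."""
    comparisons, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, comparisons
        elif items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1, comparisons

data = list(range(1024))   # N = 1024, log2(N) = 10
index, comparisons = binary_search(data, 777)
assert index == 777
assert comparisons <= math.log2(len(data)) + 1
```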
A function f(x) is said to grow logarithmically, if it is (sometimes approximately) proportional to the logarithm. (Biology, in describing growth of organisms, uses this term for an exponential function, though.[36]) It is irrelevant to which base the logarithm is taken, since choosing a different base amounts to multiplying the result by a constant, as follows from the change-of-base formula above. For example, any natural number N can be represented in binary form in no more than ⌊log2(N)⌋ + 1 bits. In other words, the amount of hard disk space needed to store N grows logarithmically as a function of N. Corresponding calculations carried out using loge will lead to results in nats which may lack this intuitive interpretation. The change amounts to a factor of loge(2) ≈ 0.69—twice as many values can be encoded with one additional bit, which corresponds to an increase of about 0.69 nats. A similar example is the relation of decibel, using a common logarithm, vis-à-vis neper, based on a natural logarithm.
Entropy
Entropy, broadly speaking a measure of (dis-)order of some system, also relies on logarithms. In thermodynamics, the entropy S of some physical system is defined by
The sum is over all states i the system in question can have, pi is the probability that the state i is attained and k is the Boltzmann constant. Similarly, entropy in information theory is a measure of quantity of information. If a message recipient may expect any one of N possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as log2 N bits.[37]
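The information-theoretic statement as code:

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum p_i * log2(p_i), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# N equally likely messages carry log2(N) bits each
assert math.isclose(entropy_bits([1 / 8] * 8), math.log2(8))   # 3 bits
# a certain outcome carries no information
assert entropy_bits([1.0]) == 0.0
```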
Fractals

Logarithms occur in various definitions of the dimension of fractals.[38] Fractals are geometric objects that are self-similar, i.e., that have small parts which reproduce (at least roughly) the entire global structure. The Sierpinski triangle, for example, can be covered by three copies of it having half the original size. This causes the Hausdorff dimension of this structure to be log(3)/log(2) ≈ 1.58. The idea of scaling invariance is also inherent to Benford's law mentioned above. Another logarithm-based notion of dimension is obtained by counting the number of boxes needed to cover the fractal in question.
Number theory
Natural logarithms are closely linked to counting prime numbers, an important topic in number theory. For any given integer x, the quantity of prime numbers less than or equal to x is denoted π(x). In its simplest form, the prime number theorem asserts that π(x) is approximately given by
in the sense that the ratio of π(x) and that fraction approaches 1 when x tends to infinity.[39] This can be rephrased by saying that the probability that a randomly chosen number between 1 and x is prime is inversely proportional to the number of decimal digits of x. A far better estimate of π(x) is given by the offset logarithmic integral function Li(x), defined by
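A sieve-based comparison of π(x) with x/ln(x); the slow approach of the ratio toward 1 is already visible at moderate x:

```python
import math

def prime_count(x):
    """pi(x): number of primes <= x, via the sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)

x = 100_000
ratio = prime_count(x) / (x / math.log(x))
assert prime_count(x) == 9592   # the known value of pi(100000)
assert 1.0 < ratio < 1.2        # tends to 1 as x grows
```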
The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms of comparing π(x) and Li(x).[40] The Erdős–Kac theorem describing the number of distinct prime factors also involves the natural logarithm.
By the formula calculating logarithms of products, the logarithm of n factorial, n! = 1 · 2 · ... · n, is given by
This can be used to obtain Stirling's formula, an approximation for n! for large n.[41]
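Numerically, the sum of logarithms and Stirling's approximation ln(n!) ≈ n ln(n) − n + ½ ln(2πn) agree closely:

```python
import math

n = 1000
log_factorial = sum(math.log(k) for k in range(1, n + 1))   # ln(n!) as a sum of logs
stirling = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

assert abs(log_factorial - stirling) < 1e-3   # the error behaves like 1/(12n)
```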
Music
Logarithms are related to musical tones and intervals. In equal temperament, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or pitch of the individual tones. Therefore, logarithms can be used to describe the intervals: an interval is measured in semitones by taking the base-2^(1/12) logarithm of the frequency ratio, while the base-2^(1/1200) logarithm of the frequency ratio expresses the interval in cents, hundredths of a semitone. The latter is used for finer encoding, as it is needed for non-equal temperaments.[42] The table below lists some musical intervals together with the frequency ratios and their logarithms.
Interval (the two tones are played at the same time) | 1/72 tone | Semitone | Just major third | Major third | Tritone | Octave |
---|---|---|---|---|---|---|
Frequency ratio r | 2^(1/72) ≈ 1.0097 | 2^(1/12) ≈ 1.0595 | 5/4 = 1.25 | 2^(4/12) ≈ 1.2599 | 2^(6/12) ≈ 1.4142 | 2^(12/12) = 2 |
Corresponding number of semitones | 1/6 | 1 | ≈ 3.86 | 4 | 6 | 12 |
Corresponding number of cents | 16 2/3 | 100 | ≈ 386.31 | 400 | 600 | 1200 |
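The two logarithms in question reduce to base-2 logarithms; a sketch (the helper names are ours):

```python
import math

def semitones(ratio):
    """Interval size in semitones: log base 2**(1/12), i.e. 12 * log2(ratio)."""
    return 12 * math.log2(ratio)

def cents(ratio):
    """Interval size in cents: log base 2**(1/1200), i.e. 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

assert math.isclose(semitones(2), 12)            # octave
assert math.isclose(cents(2 ** (1 / 12)), 100)   # equal-tempered semitone
assert round(cents(5 / 4), 2) == 386.31          # just major third
```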
Related notions
The cologarithm of a number is the logarithm of the reciprocal of the number: cologb(x) = logb(1/x) = −logb(x).[43] The antilogarithm function antilogb(y) is the inverse function of the logarithm function logb(x); it can be written in closed form as b^y.[44] Both terminologies are primarily found in older books.
The double or iterated logarithm, ln(ln(x)), is the inverse function of the double exponential function. The super- or hyper-4-logarithm is the inverse function of tetration. The super-logarithm of x grows even more slowly than the double logarithm for large x. The Lambert W function is the inverse function of f(w) = w·e^w.
From the perspective of pure mathematics, the identity log(cd) = log(c) + log(d) expresses an isomorphism between the multiplicative group of the positive real numbers and the group of all the reals under addition. By means of that isomorphism, the Lebesgue measure dx on R corresponds to the Haar measure dx/x on the positive reals.[45] Logarithmic functions are the only continuous isomorphisms from the multiplicative group of positive real numbers to the additive group of real numbers.[46] In complex analysis and algebraic geometry, differential forms of the form (d log(f) =) df/f are known as forms with logarithmic poles.[47] This notion in turn gives rise to concepts such as logarithmic pairs, log terminal singularities or logarithmic geometry.
The polylogarithm is the function defined by
It is related to the natural logarithm by Li1(z) = −ln(1 − z). Moreover, Lis(1) equals the Riemann zeta function ζ(s).
Exponentiation occurs in many areas of mathematics and its inverse function is often referred to as the logarithm. For example, the logarithm of a matrix is the (multi-valued) inverse function of the matrix exponential.[49] Another example is the p-adic logarithm, the inverse function to the p-adic exponential. Both these functions are defined in terms of Taylor series analogous to the real case. Unlike the real case, though, the p-adic logarithm can be extended to all non-zero p-adic numbers.[50]
The discrete logarithm is a related notion in the theory of finite groups. It involves solving the equation bn = x, where b and x are elements of the group, and n is an integer specifying a power in the group operation. Zech's logarithm is related to the discrete logarithm in the multiplicative group of non-zero elements of a finite field.[51] For some finite groups, it is believed that the discrete logarithm is very hard to calculate, whereas discrete exponentials are quite easy. This asymmetry has applications in public key cryptography, more specifically in elliptic curve cryptography.[52]
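A brute-force sketch of the asymmetry in a small group. The prime 101 and generator 2 are toy choices of ours; real cryptographic groups are astronomically larger, which is exactly what makes the exhaustive search below infeasible:

```python
# Discrete logarithm in the multiplicative group modulo a prime p:
# find n with b**n congruent to x (mod p).
def discrete_log(b, x, p):
    value = 1
    for n in range(p - 1):       # exhaustive search: hopeless for large p
        if value == x:
            return n
        value = (value * b) % p
    return None

p, b = 101, 2                    # 2 is a primitive root modulo 101
n = discrete_log(b, 27, p)
assert pow(b, n, p) == 27        # the easy direction confirms the hard one
```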
History
Predecessors
The Indian mathematician Virasena worked with the concept of ardhaccheda: the number of times a number could be halved; effectively similar to the integer part of logarithms to base 2. He described relations such as the above product formula and also introduced logarithms in base 3 (trakacheda) and base 4 (caturthacheda).[53][54] Michael Stifel published Arithmetica integra in Nuremberg in 1544; it contains a table of integers and powers of 2 that some have considered to be an early version of a logarithmic table.[55][56]
John Napier

The method of logarithms was publicly propounded in 1614, in a book entitled Mirifici Logarithmorum Canonis Descriptio, by John Napier.[57] (Joost Bürgi independently discovered logarithms; however, he did not publish his discovery until four years after Napier.)
By repeated subtractions Napier calculated 10^7(1 − 10^−7)^L for L ranging from 1 to 100. The result for L = 100 is approximately 0.99999 = 1 − 10^−5. Napier then calculated the products of these numbers with 10^7(1 − 10^−5)^L for L from 1 to 50, and did similarly with 0.9998 ≈ (1 − 10^−5)^20 and 0.9 ≈ 0.995^20. These computations, which occupied Napier for 20 years, allowed him to give, for any number N between 5,000,000 and 10^7, the number L solving the equation
Napier first called L an "artificial number", but later introduced the word "logarithm" to mean a number that indicates a ratio: λόγος (logos), meaning proportion, and ἀριθμός (arithmos), meaning number. In modern notation:
because
However, the number e and the modern definition for logarithms were developed about 100 years later, by Euler in 1728.[58][59]
The invention of logarithms was quickly met with acclaim in many countries. Laplace called logarithms
[a]n admirable artifice which, by reducing to a few days the labour of many months, doubles the life of the astronomer, and spares him the errors and disgust inseparable from long calculations.[60]
The work of Cavalieri (Italy), Wingate (France), Fengzuo (China), and Kepler's Chilias logarithmorum (Germany) helped spread the concept further.[61]
Tables of logarithms and historical applications
Logarithms contributed to the advance of science, and especially of astronomy. They were used constantly in surveying, celestial navigation, and other scientific branches. A key tool for their practical use was the logarithm table. For a fixed base b (usually b = 10), these tables list the values of logb(x) and b^x for any number x in a certain range, up to a certain precision. Given two positive numbers c and d, cd and c/d were calculated as follows: first, the logarithms logb(c) and logb(d) were looked up. Secondly, the sum of the two logarithms (or their difference, respectively) was calculated; this is an easy operation. Finally, raising b to the power of this sum (or difference) was again done by looking up the table. The result is indeed cd (or c/d), as shown by the formulas
For manual calculations that demand any appreciable precision, this process, requiring three lookups and a sum, is much faster than performing the multiplication by any previously known method such as prosthaphaeresis, which relies on trigonometric identities. With the advent of calculators and computers, logarithm tables fell into disuse. The calculation of powers and roots is reduced to multiplications or divisions and look-ups by
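The three-lookup procedure can be simulated with a rounded four-place table (the range and rounding here are arbitrary illustrative choices):

```python
import math

# a four-decimal-place table of common logarithms, as in historical handbooks
table = {x: round(math.log10(x), 4) for x in range(1, 10_000)}

c, d = 487, 734
log_sum = table[c] + table[d]   # lookup, lookup, one addition
product = 10 ** log_sum         # the antilogarithm lookup

assert abs(product - c * d) / (c * d) < 1e-3   # correct to about four digits
```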
For different needs, logarithm tables ranging from small handbooks to multi-volume editions have been compiled:
Year | Author | Range | Decimal places | Note |
---|---|---|---|---|
1617 | Henry Briggs | 1–1000 | 8 | |
1624 | Henry Briggs Arithmetica Logarithmica | 1–20,000, 90,000–100,000 | 14 | |
1628 | Adriaan Vlacq | 20,000–90,000 | 10 | contained only 603 errors[62] |
1792–94 | Gaspard de Prony Tables du Cadastre | 1–100,000 and 100,000–200,000 | 19 and 24, respectively | "seventeen enormous folios",[63] never published |
1794 | Jurij Vega Thesaurus Logarithmorum Completus (Leipzig) | corrected edition of Vlacq's work | ||
1795 | François Callet (Paris) | 100,000–108,000 | 7 | |
1871 | Sang | 1–200,000 | 7 |
Another critical application was the slide rule. The non-sliding logarithmic scale or Gunter's rule was invented by Edmund Gunter shortly after Napier's invention. This design was improved by William Oughtred into the slide rule: a pair of logarithmic scales movable with respect to each other. The slide rule was an essential calculating tool for engineers and scientists until the 1970s since it allows much faster computation than techniques based on tables.[58] The slide rule provides somewhat less precision than typical table-based calculations, but enough for many types of engineering.
See also
Notes
^ a: Some mathematicians disapprove of this notation. In his 1985 autobiography, Paul Halmos criticized what he considered the "childish ln notation," which he said no mathematician had ever used.[64]
In fact, the notation was invented by a mathematician, Irving Stringham.[65][66]
^ b: The same series holds for the principal value of the complex logarithm, for complex numbers z satisfying |z − 1| < 1.
^ c: For rules concerning exponentiation such as b−1 = 1/b, bm + n = bm · bn, see exponentiation or [67] for an elementary treatise.
^ d: See radian for the conversion between 2π and 360 degrees.
^ e: For example C, Java, Haskell, and BASIC.
References
- ^ Kate, S.K.; Bhapkar, H.R. (2009), Basics Of Mathematics, Technical Publications, ISBN 9788184317558, see Chapter 1
- ^ All statements in this section can be found in (Shailesh Shirali 2002), (Douglas Downing 2003), or (Kate & Bhapkar 2009), for example.
- ^ Downing, Douglas (2003), Algebra the Easy Way, Barron's Educational Series, ISBN 978-0-7641-1972-9, chapter 17, p. 275
- ^ B. N. Taylor (1995). "Guide for the Use of the International System of Units (SI)". NIST Special Publication 811, 1995 Edition. US Department of Commerce.
- ^ Gullberg, Jan (1997), Mathematics: From the Birth of Numbers, W. W. Norton & Co, ISBN 039304002X
- ^ a b Lang, Serge (1997), Undergraduate analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94841-6, MR1476913, section III.3
- ^ Lang 1997, section IV.2
- ^ Courant, Richard (1988), Differential and integral calculus. Vol. I, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-60842-4, MR1009558, see Section III.6
- ^ Havil, Julian (2003), Gamma: Exploring Euler's Constant, Princeton University Press, ISBN 978-0-691-09983-5, see sections 11.5 and 13.8
- ^ Zhang, M.; Delgado-Frias, J.G.; Vassiliadis, S. (1994), "Table driven Newton scheme for high precision logarithm generation" (PDF), IEE Proceedings Computers & Digital Techniques, 141 (5): 281–292, doi:10.1049/ip-cdt:19941268, see section 1 for an overview
- ^ Baker, Alan (1975), Transcendental number theory, Cambridge University Press, ISBN 978-0-521-20461-3, page 10
- ^ a b Abramowitz & Stegun, eds. 1972, p. 68
- ^ Sasaki, T.; Kanada, Y., "Practically fast multiple-precision evaluation of log(x)", Journal of Information Processing, 5 (1982): 247–250
- ^ Ahrendt, Timm (1999), Fast computations of the exponential function, Lecture notes in computer science, vol. 1564, pp. 302–312, doi:10.1007/3-540-49116-3_28
- ^ Moore, Theral Orvis; Hadlock, Edwin H. (1991), Complex analysis, World Scientific, ISBN 9789810202460, see Section 1.2
- ^ Ganguly, S. (2005), Elements of Complex Analysis, Academic Publishers, ISBN 9788187504863, Definition 1.6.3
- ^ Nevanlinna, Rolf Herman; Paatero, Veikko (2007), Introduction to complex analysis, AMS Bookstore, ISBN 978-0-8218-4399-4, section 5.9
- ^ Maor, Eli (2009), E: The Story of a Number, Princeton University Press, ISBN 978-0-691-14134-3, see page 135
- ^ Crauder, Bruce; Evans, Benny; Noell, Alan (2008), Functions and Change: A Modeling Approach to College Algebra (4th ed.), Cengage Learning, ISBN 978-0-547-15669-9, section 4.4.
- ^ IUPAC (1997), A. D. McNaught, A. Wilkinson (ed.), Compendium of Chemical Terminology ("Gold Book") (2nd ed.), Oxford: Blackwell Scientific Publications, doi:10.1351/goldbook, ISBN 0-9678550-9-8
- ^ Bakshi, U. A. (2009), Telecommunication Engineering, Technical Publications, ISBN 9788184317251, see Section 5.2
- ^ Maling, George C. (2007), "Noise", in Rossing, Thomas D. (ed.), Springer handbook of acoustics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-30446-5, section 23.0.2
- ^ Bradt, Hale (2004), Astronomy methods: a physical approach to astronomical observations, Cambridge Planetary Science, Cambridge University Press, ISBN 978-0-521-53551-9, see section 8.3, p. 231
- ^ Bird, J. O. (2001), Newnes engineering mathematics pocket book (3rd ed.), Oxford: Newnes, ISBN 978-0-7506-4992-6, see section 34
- ^ Boring, Edwin Garrigues (2007), Psychology – A Factual Textbook, Lightning Source Inc, ISBN 978-1-4067-4750-8, pp. 196–201
- ^ Nadel, Lynn (2005), Encyclopedia of cognitive science, New York: John Wiley & Sons, ISBN 978-0-470-01619-0, see the lemmas Psychophysics and Perception: Overview
- ^ Welford, A. T. (1968), Fundamentals of skill, London: Methuen, ISBN 978-0-416-03000-6, OCLC 219156, p. 61
- ^ Siegler, Robert S.; Opfer, John E. (2003), "The Development of Numerical Estimation. Evidence for Multiple Representations of Numerical Quantity", Psychological Science, 14 (3): 237–43, doi:10.1111/1467-9280.02438
- ^ Dehaene, Stanislas; Izard, Véronique; Spelke, Elizabeth; Pica, Pierre (2008), "Log or Linear? Distinct Intuitions of the Number Scale in Western and Amazonian Indigene Cultures", Science, 320 (5880): 1217–1220, doi:10.1126/science.1156540, PMC 2610411, PMID 18511690
- ^ Rose, Colin; Smith, Murray D. (2002), Mathematical statistics with Mathematica, Springer texts in statistics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-95234-5, section 11.3
- ^ Tabachnikov, Serge (2005), Geometry and Billiards, Providence, R.I.: American Mathematical Society, pp. 36–40, ISBN 978-0-8218-3919-5, see Section 2.1
- ^ Durtschi, Cindy; Hillison, William; Pacini, Carl (2004), "The Effective Use of Benford's Law in Detecting Fraud in Accounting Data" (PDF), Journal of Forensic Accounting, V: 17–34.
- ^ Aitchison, J.; Brown, J. A. C. (1969), The lognormal distribution, Cambridge University Press, ISBN 978-0-521-04011-2, OCLC 301100935
- ^ Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, ISBN 978-3-540-21045-0, section 2.4.
- ^ Harel, David; Feldman, Yishai A. (2004), Algorithmics: the spirit of computing, Addison-Wesley, ISBN 978-0-321-11784-7, p. 143
- ^ Mohr, Hans; Schopfer, Peter (1995), Plant physiology, Berlin, New York: Springer-Verlag, ISBN 978-3-540-58016-4, see Chapter 19, p. 298
- ^ Eco, Umberto (1989), The open work, Harvard University Press, ISBN 978-0-674-63976-8, see section III.I
- ^ Helmberg, Gilbert (2007), Getting acquainted with fractals, De Gruyter Textbook, Walter de Gruyter, ISBN 978-3-11-019092-2
- ^ Bateman, P. T.; Diamond, Harold G. (2004), Analytic number theory: an introductory course, World Scientific, ISBN 9789812560803, OCLC 492669517, see Theorem 4.1
- ^ P. T. Bateman & Diamond 2004, Theorem 8.15
- ^ Slomson, Alan B. (1991), An introduction to combinatorics, London: CRC Press, ISBN 978-0-412-35370-3, see Chapter 4
- ^ Wright, David (2009), Mathematics and music, AMS Bookstore, ISBN 978-0-8218-4873-9, see Chapter 5
- ^ Wooster, Woodruff B; Smith, David E (1902), Academic Algebra, Ginn & Company, p. 360
- ^ Abramowitz, Milton; Stegun, Irene A., eds. (1972), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover Publications, ISBN 978-0-486-61272-0, tenth printing, see section 4.7., p. 89
- ^ Ambartzumian, R. V. (1990), Factorization calculus and geometric probability, Cambridge University Press, ISBN 978-0-521-34535-4, see Section 1.4
- ^ Bourbaki, Nicolas (1998), General topology. Chapters 5—10, Elements of Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64563-4, MR1726872, see section V.4.1
- ^ Esnault, Hélène; Viehweg, Eckart (1992), Lectures on vanishing theorems, DMV Seminar, vol. 20, Birkhäuser Verlag, ISBN 978-3-7643-2822-1, MR1193913, see section 2
- ^ Apostol, T.M. (2010), "Logarithm", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248
- ^ Higham, Nicholas (2008), Functions of Matrices. Theory and Computation, SIAM, ISBN 978-0-89871-646-7, see Chapter 11.
- ^ Neukirch, Jürgen (1999). Algebraische Zahlentheorie. Grundlehren der mathematischen Wissenschaften. Vol. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021., Section II.5.
- ^ Lidl, Rudolf; Niederreiter, Harald (1997), Finite fields, Cambridge University Press, ISBN 978-0-521-39231-0
- ^ Stinson, Douglas Robert (2006), Cryptography: Theory and Practice (3rd ed.), London: CRC Press, ISBN 978-1-58488-508-5
- ^ Gupta, R. C. (2000), "History of Mathematics in India", in Hoiberg, Dale; Ramchandani (eds.), Students' Britannica India: Select essays, Popular Prakashan, p. 329
- ^ Singh, A. N., Lucknow University, http://www.jainworld.com/JWHindi/Books/shatkhandagama-4/02.htm
- ^ Walter William Rouse Ball (1908), A short account of the history of mathematics, Macmillan and Co, p. 216
- ^ Vivian Shaw Groza and Susanne M. Shelley (1972), Precalculus mathematics, p. 182, ISBN 9780030776700
- ^ Hobson, Ernest William (1914), John Napier and the Invention of Logarithms, 1614, The University Press
- ^ a b Maor 2009, section 1
- ^ Eves, Howard Whitley (1992), An introduction to the history of mathematics, The Saunders series (6th ed.), Philadelphia: Saunders, ISBN 978-0-03-029558-4, section 9-3
- ^ Bryant, Walter W., A History of Astronomy (PDF), Forgotten Books, ISBN 978-1-4400-5792-2, page 44
- ^ Maor 2009, section 2
- ^ "this cannot be regarded as a great number, when it is considered that the table was the result of an original calculation, and that more than 2,100,000 printed figures are liable to error.", Athenaeum, 15 June 1872. See also the Monthly Notices of the Royal Astronomical Society for May 1872.
- ^ English Cyclopaedia, Biography, Vol. IV., article "Prony."
- ^ Paul Halmos (1985), I Want to Be a Mathematician: An Automathography, Springer-Verlag, ISBN 978-0387960784
- ^ Irving Stringham (1893), Uniplanar algebra: being part I of a propædeutic to the higher mathematical analysis, The Berkeley Press, p. xiii
- ^ Roy S. Freedman (2006), Introduction to Financial Technology, Academic Press, p. 59, ISBN 9780123704788
- ^ Shirali, Shailesh (2002), A Primer on Logarithms, Universities Press, ISBN 9788173714146, esp. section 2
External links
- Educational video on logarithms, retrieved 12 October 2010
- Translation of Napier's work on logarithms, retrieved 12 October 2010