Dictionary Definition
integral adj
1. existing as an essential constituent or characteristic; "the Ptolemaic system with its built-in concept of periodicity"; "a constitutional inability to tell the truth" [syn: built-in, constitutional, inbuilt, inherent]
2. constituting the undiminished entirety; lacking nothing essential; especially, not damaged; "a local motion keepeth bodies integral" (Bacon); "was able to keep the collection entire during his lifetime"; "fought to keep the union intact" [syn: entire, intact]
integral n : the result of a mathematical integration; F(x) is the integral of f(x) if dF/dx = f(x)
User Contributed Dictionary
English
Pronunciation
Etymology
From integer.
Adjective
Derived terms
- definite integral
- indefinite integral
- integral brick
- integral calculus
- integral closure
- integral cosmology
- integral cuboid
- integral current
- integral curvature
- integral curve
- integral domain
- integral drawing
- integral ecology
- integral element
- integral energy
- integral equation
- integral extension
- integral fast reactor
- integral field unit
- integral function
- integral geometry
- integral graph
- integral homology group
- integral kernel
- integral membrane protein
- integral politics
- integral polyhedron
- integral polynomial
- integral post-metaphysics
- integral psychology
- integral theory
- integral transform
- integral transformative practice
- integral yoga
Synonyms
Translations
- Dutch: integraal
- Finnish: oleellinen (oleellinen osa) (1)
- German: integral (1)
- Hungarian: integrál (2)
- Swedish: heltals- (2)
Noun
- A numerical measure computed by a limiting process in which the domain of a function is divided into small subintervals and the value of the function at a point in each subinterval is multiplied by the measurement of that subinterval, all these products then being summed.
- The result of summation of the product of a function and an infinitesimal.
Synonyms
- (in analysis): antiderivative
Derived terms
Related terms
Translations
notion in mathematics
See also
Portuguese
Noun
integral f
- integral (in analysis)
Swedish
Noun
integral
- integral (in analysis)
Extensive Definition
- The word "integral" (adjective) can also mean: "being an integer".
- \int_a^b f(x)\,dx
is equal to the area of a
region in the xy-plane bounded by the graph
of f, the x-axis, and the vertical lines x = a and x = b, with
areas below the x-axis being subtracted.
The term "integral" may also refer to the notion
of antiderivative, a
function F whose derivative is the given
function f. In this case it is called an indefinite integral, while
the integrals discussed in this article are termed definite
integrals. Some authors maintain a distinction between
antiderivatives and indefinite integrals.
The principles of integration were formulated by
Isaac
Newton and Gottfried
Leibniz in the late seventeenth century. Through the
fundamental theorem of calculus, which they independently
developed, integration is connected with differentiation:
if f is a continuous real-valued function defined on a closed
interval [a, b], then, once an antiderivative F of f is known,
the definite integral of f over that interval is given by:
-
- \int_a^b f(x)\,dx = F(b) - F(a).
Integrals and derivatives became the basic tools
of calculus, with
numerous applications in science and engineering. A rigorous
mathematical definition of the integral was given by Bernhard
Riemann. It is based on a limiting
procedure which approximates the area of a curvilinear region by
breaking the region into thin vertical slabs. Beginning in the
nineteenth century, more sophisticated notions of integral began to
appear, where the type of the function as well as the domain over
which the integration is performed has been generalised. A line
integral is defined for functions of two or three variables,
and the interval of integration [a,b] is replaced by a certain
curve connecting two
points on the plane or in the space. In a surface
integral, the curve is replaced by a piece of a surface in the three-dimensional
space. Integrals of differential
forms play a fundamental role in modern differential
geometry. These generalizations of integral first arose from
the needs of physics,
and they play an important role in the formulation of many physical
laws, notably those of electrodynamics.
Modern concepts of integration are based on the abstract
mathematical theory known as Lebesgue
integration, developed by Henri
Lebesgue.
History
See also: History of calculus
Pre-calculus integration
Integration can be traced as far back as ancient Egypt, circa 1800 BC, with the Moscow Mathematical Papyrus demonstrating knowledge of a formula for the volume of a pyramidal frustum. The first documented systematic technique capable of determining integrals is the method of exhaustion of Eudoxus (circa 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of shapes for which the area or volume was known. This method was further developed and employed by Archimedes and used to calculate areas for parabolas and an approximation to the area of a circle. Similar methods were independently developed in China around the 3rd century AD by Liu Hui, who used them to find the area of the circle. This method was later used by Zu Chongzhi to find the volume of a sphere. Some ideas of integral calculus are found in the Siddhanta Shiromani, a 12th century astronomy text by the Indian mathematician Bhāskara II.
Significant advances on techniques such as the
method of exhaustion did not begin to appear until the 16th century
AD. At this time the work of Cavalieri
with his method of indivisibles, and work by Fermat,
began to lay the foundations of modern calculus. Further steps were
made in the early 17th century by Barrow and
Torricelli,
who provided the first hints of a connection between integration
and differentiation.
Newton and Leibniz
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions within continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz.
Formalizing integrals
While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigor. Bishop Berkeley memorably attacked infinitesimals as "the ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits and was given a suitable foundation by Cauchy in the first half of the 19th century. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann integrable on a bounded interval, subsequently more general functions were considered, to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory. Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed.
Notation
Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with \dot{x} or x'\,\!, which Newton used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.
The modern notation for the indefinite integral
was introduced by Gottfried
Leibniz in 1675. He adapted the integral symbol, "∫", from
an elongated
letter S, standing for summa (Latin for "sum" or "total"). The
modern notation for the definite integral, with limits above and
below the integral sign, was first used by Joseph
Fourier in Mémoires of the French Academy around 1819–20,
reprinted in his book of 1822. In Arabic mathematical notation, which is written from right to left, an inverted integral symbol is used.
To start off, consider the curve
y = f(x) between x = 0 and
x = 1, with
f(x) = √x. We ask:
- What is the area under the function f, in the interval from 0 to 1?
We call this (yet unknown) area the integral of f and denote it by
- \int_0^1 \sqrt x \, dx \,\!.
As a first approximation, look at the unit square
given by the sides x = 0 to
x = 1 and
y = f(0) = 0 and
y = f(1) = 1. Its area is
exactly 1. As it is, the true value of the integral must be
somewhat less. Decreasing the width of the approximation rectangles gives a better result; so cross the interval in five steps,
using the approximation points 0, 1⁄5, 2⁄5, and
so on to 1. Fit a box for each step using the right end height of
each curve piece, thus √1⁄5,
√2⁄5, and so on to
√1 = 1. Summing the areas of these
rectangles, we get a better approximation for the sought integral,
namely
- \textstyle \sqrt{\tfrac{1}{5}} \left( \tfrac{1}{5} - 0 \right) + \sqrt{\tfrac{2}{5}} \left( \tfrac{2}{5} - \tfrac{1}{5} \right) + \cdots + \sqrt{\tfrac{5}{5}} \left( \tfrac{5}{5} - \tfrac{4}{5} \right) \approx 0.7497\,\!
Notice that we are taking a sum of finitely many
function values of f, multiplied with the differences of two
subsequent approximation points. We can easily see that the
approximation is still too large. Using more steps produces a
closer approximation, but will never be exact: replacing the 5 subintervals by twelve, this time using the left end height of each piece, we get an approximate value for the area of 0.6203, which is too small. The key idea is
the transition from adding finitely many differences of
approximation points multiplied by their respective function values
to using infinitely fine, or infinitesimal steps.
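These finite sums are easy to reproduce numerically. The Python sketch below (illustrative only, not part of the original text) computes the five-step right-endpoint sum and the twelve-step left-endpoint sum for f(x) = √x:

    # Riemann sums for the area under f(x) = sqrt(x) on [0, 1].
    from math import sqrt

    def right_sum(f, a, b, n):
        """Sum of f at the right end of each of n equal subintervals, times the width."""
        h = (b - a) / n
        return sum(f(a + i * h) for i in range(1, n + 1)) * h

    def left_sum(f, a, b, n):
        """Sum of f at the left end of each of n equal subintervals, times the width."""
        h = (b - a) / n
        return sum(f(a + i * h) for i in range(n)) * h

    f = sqrt
    print(right_sum(f, 0.0, 1.0, 5))      # approximately 0.7497 (too large)
    print(left_sum(f, 0.0, 1.0, 12))      # approximately 0.6203 (too small)
    print(right_sum(f, 0.0, 1.0, 10**6))  # approaches the exact value 2/3

Increasing the number of steps makes both sums converge toward the same value, which is the transition to infinitesimal steps described above.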
As for the actual calculation of integrals, the
fundamental theorem of calculus, due to Newton and Leibniz, is
the fundamental link between the operations of differentiating
and integrating. Applied to the square root curve, f(x) = x^{1/2}, it says to look at the related function F(x) = \tfrac{2}{3} x^{3/2} and simply take F(1) − F(0), where 0 and 1 are the boundaries of the interval [0,1]. (This is a case of a general rule: for f(x) = x^q, with q ≠ −1, the related function, the so-called antiderivative, is F(x) = x^{q+1}/(q + 1).) So the exact value of the area under the curve is computed formally as
- \int_0^1 \sqrt{x} \, dx = \int_0^1 x^{1/2} \, dx = \int_0^1 d\left( \tfrac{2}{3} x^{3/2} \right) = \tfrac{2}{3}.
The notation
- \int f(x) \, dx \,\!
conceives the integral as a weighted sum, denoted by the elongated "s", of function values f(x) multiplied by infinitesimal step widths, the so-called differentials, denoted by dx.
Historically, after the failure of early efforts
to rigorously interpret infinitesimals, Riemann formally defined
integrals as a limit
of weighted sums, so that the dx suggested the limit of a
difference (namely, the interval width). Shortcomings of Riemann's
dependence on intervals and continuity motivated newer definitions,
especially the Lebesgue
integral, which is founded on an ability to extend the idea of
"measure" in much more flexible ways. Thus the notation
- \int_A f(x) \, d\mu \,\!
Differential
geometry, with its "calculus on manifolds", gives the familiar
notation yet another interpretation. Now f(x) and dx become a
differential
form, ω = f(x) dx, a new
differential
operator d, known as the exterior
derivative appears, and the fundamental theorem becomes the
more general Stokes'
theorem,
- \int_{\Omega} \mathrm{d}\omega = \int_{\partial\Omega} \omega . \,\!
More recently, infinitesimals have reappeared
with rigor, through modern innovations such as non-standard
analysis. Not only do these methods vindicate the intuitions of
the pioneers, they also lead to new mathematics.
Although there are differences between these
conceptions of integral, there is considerable overlap. Thus the
area of the surface of the oval swimming pool can be handled as a
geometric ellipse, as a sum of infinitesimals, as a Riemann
integral, as a Lebesgue integral, or as a manifold with a
differential form. The calculated result will be the same for
all.
Formal definitions
There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but also occasionally for pedagogical reasons. The most commonly used definitions of integral are Riemann integrals and Lebesgue integrals.
Riemann integral
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. Let [a,b] be a closed interval of the real line; then a tagged partition of [a,b] is a finite sequence
- a = x_0 \le t_1 \le x_1 \le t_2 \le x_2 \le \cdots \le x_{n-1} \le t_n \le x_n = b . \,\!
This partitions the interval [a,b] into n sub-intervals [x_{i−1}, x_i], each of which is "tagged" with a distinguished point t_i ∈ [x_{i−1}, x_i]. Let Δ_i = x_i − x_{i−1} be the width of sub-interval i; then the mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, max_{i=1…n} Δ_i. A Riemann sum of a function f with respect to such a tagged partition is defined as
- \sum_{i=1}^{n} f(t_i) \, \Delta_i ;
each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval and with the same width as the sub-interval. The Riemann integral of a function f over the interval [a,b] is equal to S if:
- For all ε > 0 there exists δ > 0 such that, for any tagged partition of [a,b] with mesh less than δ, we have
-
- \left| S - \sum_{i=1}^{n} f(t_i)\Delta_i \right| < \varepsilon .
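The definition is easy to exercise directly. The short Python sketch below (an illustrative example; the particular partition and the choice of midpoint tags are arbitrary) evaluates a Riemann sum for a tagged partition:

    # Riemann sum of f over [a, b] for a tagged partition:
    # points a = x_0 <= t_1 <= x_1 <= ... <= x_{n-1} <= t_n <= x_n = b.
    def riemann_sum(f, points, tags):
        """Sum of f(t_i) * (x_i - x_{i-1}) over the tagged partition."""
        assert len(tags) == len(points) - 1
        return sum(f(t) * (x1 - x0)
                   for t, x0, x1 in zip(tags, points[:-1], points[1:]))

    # Example: f(x) = x^2 on [0, 1], an irregular partition, midpoint tags.
    f = lambda x: x * x
    points = [0.0, 0.1, 0.35, 0.6, 0.8, 1.0]
    tags = [(x0 + x1) / 2 for x0, x1 in zip(points[:-1], points[1:])]
    mesh = max(x1 - x0 for x0, x1 in zip(points[:-1], points[1:]))
    print(mesh, riemann_sum(f, points, tags))  # as the mesh shrinks, the sums approach 1/3

As the mesh of the partitions tends to zero, these sums converge to the Riemann integral S, here 1/3.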
Lebesgue integral
The Riemann integral is not defined for a wide range of functions and situations of importance in applications (and of interest in theory). For example, the Riemann integral can easily integrate density to find the mass of a steel beam, but cannot accommodate a steel ball resting on it. This motivates other definitions, under which a broader assortment of functions is integrable. The Lebesgue integral, in particular, achieves great flexibility by directing attention to the weights in the weighted sum.
The definition of the Lebesgue integral thus
begins with a measure,
μ. In the simplest case, the Lebesgue
measure μ(A) of an interval A = [a,b] is its width, b
− a, so that the Lebesgue integral agrees with the
(proper) Riemann integral when both exist. In more complicated
cases, the sets being measured can be highly fragmented, with no
continuity and no resemblance to intervals.
To exploit this flexibility, Lebesgue integrals
reverse the approach to the weighted sum. As Folland puts it, "To compute
the Riemann integral of f, one partitions the domain [a,b] into
subintervals", while in the Lebesgue integral, "one is in effect
partitioning the range of f".
One common approach first defines the integral of
the indicator
function of a measurable
set A by:
- \int 1_A d\mu = \mu(A).
This extends by linearity to a non-negative measurable simple function s, one which takes only finitely many values a_1, …, a_n on measurable sets A_1, …, A_n; for a measurable set E one sets
- \int_E s \, d\mu = \sum_{i=1}^{n} a_i \, \mu(A_i \cap E) .
For a non-negative measurable function f, the integral is then defined as the supremum over all simple functions lying below f:
- \int_E f \, d\mu = \sup\left\{ \int_E s \, d\mu : 0 \le s \le f,\ s \text{ simple} \right\} .
A general measurable function f is split into its positive and negative parts, f = f^+ − f^−; f is called Lebesgue integrable if
- \int_E |f| \, d\mu < \infty ,
and in that case the integral is defined by
- \int_E f \, d\mu = \int_E f^+ \, d\mu - \int_E f^- \, d\mu . \,\!
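The "partition the range" idea can be imitated numerically. The Python sketch below (illustrative only; the length of finely sampled level sets stands in for Lebesgue measure, and the grid sizes are arbitrary) approximates the integral of a non-negative function as a sum of level heights times the measures of the corresponding level sets:

    # Approximate the Lebesgue integral of a bounded, non-negative f over [a, b]
    # by partitioning the *range* into levels and measuring the level sets
    # {x : f(x) > y} with (an approximation of) Lebesgue measure.
    def lebesgue_integral(f, a, b, levels=200, samples=2000):
        dx = (b - a) / samples
        values = [f(a + (i + 0.5) * dx) for i in range(samples)]  # sample the domain once
        fmax = max(values)
        dy = fmax / levels
        total = 0.0
        for k in range(levels):
            y = k * dy
            # approximate measure (total length) of the level set {x : f(x) > y}
            measure = dx * sum(1 for v in values if v > y)
            total += dy * measure
        return total

    print(lebesgue_integral(lambda x: x * x, 0.0, 1.0))  # close to 1/3

For well-behaved functions this range-based sum and the Riemann sum converge to the same value; the advantage of the Lebesgue approach appears for highly fragmented functions.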
When the measure space on which the functions are
defined is also a locally
compact topological
space (as is the case with the real numbers R), measures
compatible with the topology in a suitable sense (Radon
measures, of which the Lebesgue measure is an example) and
integrals with respect to them can be defined differently, starting
from the integrals of continuous
functions with
compact support. More precisely, the compactly supported
functions form a vector space
that carries a natural topology,
and a (Radon) measure can be defined as any continuous linear
functional on this space; the value of a measure at a compactly
supported function is then also by definition the integral of the
function. One then proceeds to expand the measure (the integral) to
more general functions by continuity, and defines the measure of a
set as the integral of its indicator function. This is the approach
taken by Bourbaki and a number of other authors. For details see
Radon measures.
Other integrals
Although the Riemann and Lebesgue integrals are the most important definitions of the integral, a number of others exist, including:
- The Riemann-Stieltjes integral, an extension of the Riemann integral.
- The Lebesgue-Stieltjes integral, further developed by Johann Radon, which generalizes the Riemann-Stieltjes and Lebesgue integrals.
- The Daniell integral, which subsumes the Lebesgue integral and Lebesgue-Stieltjes integral without the dependence on measures.
- The Henstock-Kurzweil integral, variously defined by Arnaud Denjoy, Oskar Perron, and (most elegantly, as the gauge integral) Jaroslav Kurzweil, and developed by Ralph Henstock.
- The Itō integral and Stratonovich integral, which define integration with respect to stochastic processes such as Brownian motion.
Properties of integration
Linearity
- The collection of Riemann integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration
-
- f \mapsto \int_a^b f \; dx
- is a linear functional on this vector space. Thus, firstly, the collection of integrable functions is closed under taking linear combinations; and, secondly, the integral of a linear combination is the linear combination of the integrals,
-
- \int_a^b (\alpha f + \beta g)(x) \, dx = \alpha \int_a^b f(x) \,dx + \beta \int_a^b g(x) \, dx. \,
- Similarly, the set of real-valued Lebesgue integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence form a vector space, and the Lebesgue integral
-
- f\mapsto \int_E f d\mu
- is a linear functional on this vector space, so that
-
- \int_E (\alpha f + \beta g) \, d\mu = \alpha \int_E f \, d\mu + \beta \int_E g \, d\mu.
- More generally, consider the vector space of all measurable functions on a measure space (E,μ), taking values in a locally compact complete topological vector space V over a locally compact topological field K, f : E → V. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞,
-
- f\mapsto\int_E f d\mu, \,
- that is compatible with linear combinations. In this situation the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). The most important special cases arise when K is R, C, or a finite extension of the field Qp of p-adic numbers, and V is a finite-dimensional vector space over K, and when K=C and V is a complex Hilbert space.
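Linearity can be checked numerically for particular functions. The sketch below (illustrative; a midpoint rule stands in for the integral, and f, g, α, β are arbitrary choices) verifies that the integral of αf + βg equals α times the integral of f plus β times the integral of g:

    from math import sin, exp

    def integrate(f, a, b, n=10_000):   # simple midpoint rule standing in for the integral
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    f, g = sin, exp
    alpha, beta, a, b = 2.0, -3.0, 0.0, 1.0

    lhs = integrate(lambda x: alpha * f(x) + beta * g(x), a, b)
    rhs = alpha * integrate(f, a, b) + beta * integrate(g, a, b)
    print(lhs, rhs)   # the two values agree up to rounding error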
Linearity, together with some natural continuity
properties and normalisation for a certain class of "simple"
functions, may be used to give an alternative definition of the
integral. This is the approach of Daniell
for the case of real-valued functions on a set X, generalized by
Nicolas
Bourbaki to functions with values in a locally compact
topological vector space. See the references for an axiomatic characterisation of
the integral.
Inequalities for integrals
A number of general inequalities hold for
Riemann-integrable functions
defined on a closed and
bounded
interval
[a, b] and can be generalized to other notions of integral
(Lebesgue and Daniell).
- Upper and lower bounds. An integrable function f on [a, b], is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f (x) ≤ M for all x in [a, b]. Since the lower and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that
-
- m(b - a) \leq \int_a^b f(x) \, dx \leq M(b - a).
- Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus
-
- \int_a^b f(x) \, dx \leq \int_a^b g(x) \, dx.
- This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b].
- Subintervals. If [c, d] is a subinterval of [a, b] and f(x) is non-negative for all x, then
-
- \int_c^d f(x) \, dx \leq \int_a^b f(x) \, dx.
- Products and absolute values of functions. If f and g are two functions then we may consider their pointwise products and powers, and absolute values:
- If f is Riemann-integrable on [a, b] then the same is true for |f|, and
-
- \left| \int_a^b f(x) \, dx \right| \leq \int_a^b | f(x) | \, dx.
- Moreover, if f and g are both Riemann-integrable then f 2, g 2, and fg are also Riemann-integrable, and
-
- \left( \int_a^b (fg)(x) \, dx \right)^2 \leq \left( \int_a^b f(x)^2 \, dx \right) \left( \int_a^b g(x)^2 \, dx \right).
- This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable functions f and g on the interval [a, b].
- Hölder's inequality. Suppose that p and q are two real numbers, 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, and f and g are two Riemann-integrable functions. Then the functions |f|p and |g|q are also integrable and the following Hölder's inequality holds:
- \left|\int f(x)g(x)\,dx\right| \leq \left(\int \left|f(x)\right|^p\,dx \right)^{1/p} \left(\int \left|g(x)\right|^q\,dx \right)^{1/q} .
- For p = q = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality.
- Minkowski inequality. Suppose that p ≥ 1 is a real number and f and g are Riemann-integrable functions. Then |f|p, |g|p and |f + g|p are also Riemann integrable and the following Minkowski inequality holds:
- \left(\int \left|f(x)+g(x)\right|^p\,dx \right)^{1/p} \leq \left(\int \left|f(x)\right|^p\,dx \right)^{1/p} + \left(\int \left|g(x)\right|^p\,dx \right)^{1/p} .
- An analogue of this inequality for Lebesgue integral is used in construction of Lp spaces.
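These inequalities are easy to test numerically for particular functions. The sketch below (illustrative only; the functions f and g are arbitrary choices) checks the Cauchy–Schwarz inequality and the p = 2 case of the Minkowski inequality on [0, 1]:

    from math import sin, cos, sqrt

    def integrate(f, a, b, n=10_000):   # simple midpoint rule
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    f = lambda x: sin(3 * x) + 0.5
    g = lambda x: x * cos(x)

    # Cauchy-Schwarz: (integral of f*g)^2 <= (integral of f^2) * (integral of g^2).
    lhs = integrate(lambda x: f(x) * g(x), 0, 1) ** 2
    rhs = integrate(lambda x: f(x) ** 2, 0, 1) * integrate(lambda x: g(x) ** 2, 0, 1)
    print(lhs <= rhs)   # True

    # Minkowski with p = 2: ||f + g||_2 <= ||f||_2 + ||g||_2.
    norm2 = lambda u: sqrt(integrate(lambda x: u(x) ** 2, 0, 1))
    print(norm2(lambda x: f(x) + g(x)) <= norm2(f) + norm2(g))   # True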
Conventions
In this section f is a real-valued
Riemann-integrable function.
The integral
- \int_a^b f(x) \, dx
has so far been defined only for a < b. It is convenient to extend the definition with the following conventions:
- Reversing limits of integration. If a > b then define
-
- \int_a^b f(x) \, dx = - \int_b^a f(x) \, dx.
- Integrals over intervals of length zero. If a is a real number then
-
- \int_a^a f(x) \, dx = 0.
The first convention is necessary in
consideration of taking integrals over subintervals of [a, b]; the
second says that an integral taken over a degenerate interval, or a
point,
should be zero. One
reason for the first convention is that the integrability of f on
an interval [a, b] implies that f is integrable on any subinterval
[c, d], but in particular integrals have the property that:
- Additivity of integration on intervals. If c is any element of [a, b], then
-
- \int_a^b f(x) \, dx = \int_a^c f(x) \, dx + \int_c^b f(x) \, dx.
- With the first convention, the resulting relation
- \int_a^c f(x) \, dx = \int_a^b f(x) \, dx + \int_b^c f(x) \, dx
- is then well defined for any cyclic permutation of a, b, and c.
Instead of viewing the above as conventions, one
can also adopt the point of view that integration is performed on
oriented
manifolds only. If M is such an oriented m-dimensional
manifold, and M' is the same manifold with opposed orientation and
ω is an m-form, then one has (see below for integration
of differential forms):
- \int_M \omega = - \int_{M'} \omega \,.
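These sign and additivity conventions are easy to exercise numerically. A minimal Python sketch (illustrative only; a midpoint rule stands in for the integral, and the function is an arbitrary choice):

    # Numerical check of the orientation conventions (midpoint rule as the integral).
    def integrate(f, a, b, n=10_000):
        """Signed integral: reversing the limits of integration flips the sign."""
        if a == b:
            return 0.0
        sign = 1.0 if a < b else -1.0
        lo, hi = (a, b) if a < b else (b, a)
        h = (hi - lo) / n
        return sign * h * sum(f(lo + (i + 0.5) * h) for i in range(n))

    f = lambda x: x ** 3 - x
    print(integrate(f, 0, 2), -integrate(f, 2, 0))                       # equal: reversal convention
    print(integrate(f, 0, 2), integrate(f, 0, 1) + integrate(f, 1, 2))   # equal: additivity on intervals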
Fundamental theorem of calculus
The fundamental theorem of calculus is the
statement that differentiation and integration are inverse
operations: if a continuous
function is first integrated and then differentiated, the
original function is retrieved. An important consequence, sometimes
called the second fundamental theorem of calculus, allows one to
compute integrals by using an antiderivative of the
function to be integrated.
Statements of theorems
- Fundamental theorem of calculus. Let f be a real-valued integrable function defined on a closed interval [a, b]. If F is defined for x in [a, b] by
-
- F(x) = \int_a^x f(t)\, dt.
- then F is continuous on [a, b]. If f is continuous at x in [a, b], then F is differentiable at x, and F ′(x) = f(x).
- Second fundamental theorem of calculus. Let f be a real-valued integrable function defined on a closed interval [a, b]. If F is a function such that F ′(x) = f(x) for all x in [a, b] (that is, F is an antiderivative of f), then
-
- \int_a^b f(t)\, dt = F(b) - F(a).
- Corollary. If f is a continuous function on [a, b], then f is integrable on [a, b], and F, defined by
-
- F(x) = \int_a^x f(t) \, dt
- is an anti-derivative of f on [a, b]. Moreover,
-
- \int_a^b f(t) \, dt = F(b) - F(a).
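In practice the corollary is used exactly this way. The following Python sketch (an illustrative example with f(x) = cos x, not part of the original text) compares F(b) − F(a) with a direct numerical approximation of the integral:

    from math import cos, sin

    # f(x) = cos(x) has antiderivative F(x) = sin(x).
    f, F = cos, sin
    a, b = 0.0, 1.0

    # Second fundamental theorem: the definite integral equals F(b) - F(a).
    exact = F(b) - F(a)

    # Direct numerical approximation (midpoint rule) of the same integral.
    n = 100_000
    h = (b - a) / n
    approx = h * sum(f(a + (i + 0.5) * h) for i in range(n))

    print(exact, approx)   # both are close to sin(1) = 0.84147...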
Extensions
Improper integrals
A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals.If the interval is unbounded, for instance at its
upper end, then the improper integral is the limit as that endpoint
goes to infinity.
- \int_{a}^{\infty} f(x)\,dx = \lim_{b \to \infty} \int_{a}^{b} f(x)\,dx
If the integrand is defined or finite only on a half-open interval, for instance (a, b], then again a limit may give a finite result:
- \int_{a}^{b} f(x)\,dx = \lim_{\varepsilon \to 0^{+}} \int_{a+\varepsilon}^{b} f(x)\,dx
That is, the improper integral is the limit
of proper integrals as one endpoint of the interval of integration
approaches either a specified real number,
or ∞, or −∞. In more complicated
cases, limits are required at both endpoints, or at interior
points.
Consider, for example, the function f(x) = 1/((x + 1)\sqrt{x}) integrated from 0 to ∞. At the lower bound, as x goes to 0 the function goes to ∞, and the upper bound is itself ∞, though the function goes to 0. Thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of π/6. To integrate from 1 to ∞, a Riemann sum is not possible. However, any finite upper bound, say t (with t > 1), gives a well-defined result, π/2 − 2 arctan(1/√t). This has a finite limit as t goes to infinity, namely π/2. Similarly, the integral from 1⁄3 to 1 allows a Riemann sum as well, coincidentally again producing π/6. Replacing 1⁄3 by an arbitrary positive value s (with s < 1) is equally safe, giving −π/2 + 2 arctan(1/√s). This, too, has a finite limit as s goes to zero, namely π/2. Combining the limits of the two fragments, the result of this improper integral is
- \int_0^\infty \frac{dx}{(x+1)\sqrt{x}} = \lim_{s \to 0^{+}} \int_s^1 \frac{dx}{(x+1)\sqrt{x}} + \lim_{t \to \infty} \int_1^t \frac{dx}{(x+1)\sqrt{x}} = \frac{\pi}{2} + \frac{\pi}{2} = \pi .
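Numerically, the same two-sided limiting process can be imitated by computing proper integrals on [s, 1] and [1, t] for ever more extreme s and t. The Python sketch below is illustrative only (it uses the integrand from the example above; the geometrically spaced grid is simply one convenient way to cope with the endpoint behaviour):

    from math import sqrt, pi

    # The integrand of the doubly improper example above.
    f = lambda x: 1.0 / ((x + 1.0) * sqrt(x))

    def integrate_log(g, a, b, n=100_000):
        """Midpoint rule on a geometrically spaced grid (0 < a < b), which copes with
        the steep behaviour near 0 and the long tail towards infinity."""
        r = (b / a) ** (1.0 / n)
        total, x = 0.0, a
        for _ in range(n):
            x_next = x * r
            total += g(0.5 * (x + x_next)) * (x_next - x)
            x = x_next
        return total

    # Proper integrals on [s, 1] and [1, t]; each tends to pi/2 as s -> 0 and t -> infinity.
    for s, t in [(1e-2, 1e2), (1e-4, 1e4), (1e-8, 1e8)]:
        print(integrate_log(f, s, 1.0) + integrate_log(f, 1.0, t))   # approaches pi

    print(pi)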
It may also happen that an integrand is unbounded at an interior point, in which case the integral must be split at that point; the integral exists only if the limit integrals on both sides exist and are finite. Thus, if f is unbounded near c with a < c < b,
- \int_a^b f(x)\,dx = \lim_{s \to c^{-}} \int_a^s f(x)\,dx + \lim_{t \to c^{+}} \int_t^b f(x)\,dx ,
provided both limits on the right exist.
Multiple integration
Main article: Multiple integral
Integrals can be taken over regions other than intervals. In general, an integral over a set E of a function f is written:
- \int_E f(x) \, dx.
Here x need not be a real number, but can be
another suitable quantity, for instance, a vector
in R3. Fubini's
theorem shows that such integrals can be rewritten as an
iterated
integral. In other words, the integral can be calculated by
integrating one coordinate at a time.
Just as the definite integral of a positive
function of one variable represents the area of the region between the
graph of the function and the x-axis, the double integral of a
positive function of two variables represents the volume of the region between the
surface defined by the function and the plane which contains its
domain.
(The same volume can be obtained via the triple integral
— the integral of a function in three variables
— of the constant function f(x, y, z) = 1 over the
above-mentioned region between the surface and the plane.) If the
number of variables is higher, then the integral represents a
hypervolume,
a volume of a solid of more than three dimensions that cannot be
graphed.
For example, the volume of the parallelepiped of sides 4
× 6 × 5 may be obtained in two ways:
- By the double integral
-
- \iint_D 5 \ dx\, dy
- of the function f(x, y) = 5 calculated in the region D in the xy-plane which is the base of the parallelepiped.
- By the triple integral
-
- \iiint_{\mathrm{parallelepiped}} 1 \, dx\, dy\, dz
- of the constant function 1 calculated on the parallelepiped itself.
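Both computations are easy to imitate numerically. The sketch below (illustrative; the grid sizes are arbitrary) approximates the double and triple integrals with midpoint sums and recovers the volume 120:

    # Volume of a 4 x 6 x 5 box, computed two ways with simple midpoint sums.
    def double_integral(f, ax, bx, ay, by, nx=200, ny=200):
        hx, hy = (bx - ax) / nx, (by - ay) / ny
        return sum(f(ax + (i + 0.5) * hx, ay + (j + 0.5) * hy)
                   for i in range(nx) for j in range(ny)) * hx * hy

    def triple_integral(f, ax, bx, ay, by, az, bz, n=50):
        hx, hy, hz = (bx - ax) / n, (by - ay) / n, (bz - az) / n
        return sum(f(ax + (i + 0.5) * hx, ay + (j + 0.5) * hy, az + (k + 0.5) * hz)
                   for i in range(n) for j in range(n) for k in range(n)) * hx * hy * hz

    # Double integral of the constant 5 over the 4 x 6 base D = [0,4] x [0,6].
    print(double_integral(lambda x, y: 5.0, 0, 4, 0, 6))             # 120.0
    # Triple integral of the constant 1 over the box [0,4] x [0,6] x [0,5].
    print(triple_integral(lambda x, y, z: 1.0, 0, 4, 0, 6, 0, 5))    # 120.0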
Because the concept of an antiderivative is defined only for functions of a single real variable, indefinite multiple integrals do not exist, so such integrals are all definite.
Line integrals
The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields.
A line integral (sometimes called a path integral) is an integral where the function to be integrated is evaluated along a curve. Several different kinds of line integral are in use. In the case of a closed curve it is also called a contour integral.
The function to be integrated may be a scalar field
or a vector
field. The value of the line integral is the sum of values of
the field at all points on the curve, weighted by some scalar
function on the curve (commonly arc length or,
for a vector field, the scalar
product of the vector field with a differential
vector in the curve). This weighting distinguishes the line
integral from simpler integrals defined on intervals.
Many simple formulas in physics have natural continuous analogs in
terms of line integrals; for example, the fact that work is
equal to force multiplied
by distance may be expressed (in terms of vector quantities) as:
- W=\vec F\cdot\vec d ;
for an object moving along a curve C through a force field \vec F that may vary from point to point, this generalizes to the line integral
- W=\int_C \vec F\cdot d\vec s .
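For instance, the work done by the rotational field F(x, y) = (−y, x) around the unit circle can be approximated by summing F · Δs along a fine polygonal approximation of the curve. This is an illustrative sketch, not part of the original text; the field and curve are arbitrary choices:

    from math import cos, sin, pi

    # Work done by the force field F(x, y) = (-y, x) along the unit circle,
    # parametrized by r(t) = (cos t, sin t), t in [0, 2*pi].
    def F(x, y):
        return (-y, x)

    def line_integral(F, r, t0, t1, n=10_000):
        h = (t1 - t0) / n
        total = 0.0
        x0, y0 = r(t0)
        for i in range(1, n + 1):
            x1, y1 = r(t0 + i * h)
            fx, fy = F(0.5 * (x0 + x1), 0.5 * (y0 + y1))   # field at the segment midpoint
            total += fx * (x1 - x0) + fy * (y1 - y0)        # F . ds for this segment
            x0, y0 = x1, y1
        return total

    print(line_integral(F, lambda t: (cos(t), sin(t)), 0.0, 2 * pi))  # close to 2*pi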
Surface integrals
A surface integral is a definite integral taken over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums.
For an example of applications of surface
integrals, consider a vector field v on a surface S; that is, for
each point x in S, v(x) is a vector. Imagine that we have a fluid
flowing through S, such that v(x) determines the velocity of the
fluid at x. The flux is
defined as the quantity of fluid flowing through S in unit amount
of time. To find the flux, we need to take the dot product
of v with the unit surface
normal to S at each point, which will give us a scalar field,
which we integrate over the surface:
- \int_S \mathbf{v} \cdot d\mathbf{S} .
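As a numerical sketch (the field and surface here are arbitrary choices for illustration), the flux of v(x, y, z) = (0, 0, z) through the upper unit hemisphere can be approximated by summing v · n over small surface elements in spherical coordinates; the exact value is 2π/3:

    from math import sin, cos, pi

    # Flux of the field v(x, y, z) = (0, 0, z) through the upper unit hemisphere,
    # parametrized by spherical angles (theta, phi); the outward unit normal is radial.
    def flux(n_theta=400, n_phi=400):
        dtheta = (pi / 2) / n_theta
        dphi = (2 * pi) / n_phi
        total = 0.0
        for i in range(n_theta):
            theta = (i + 0.5) * dtheta
            for j in range(n_phi):
                phi = (j + 0.5) * dphi
                # outward unit normal (equals the point itself on the unit sphere)
                nx, ny, nz = sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)
                v_dot_n = nz * nz                    # v = (0, 0, z) with z = cos(theta)
                dS = sin(theta) * dtheta * dphi      # surface element of the unit sphere
                total += v_dot_n * dS
        return total

    print(flux(), 2 * pi / 3)   # the two values agree closely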
Integrals of differential forms
A differential
form is a mathematical concept in the fields of multivariable
calculus, differential
topology and tensors. The modern
notation for the differential form, as well as the idea of the
differential forms as being the wedge
products of exterior
derivatives forming an exterior
algebra, was introduced by Élie
Cartan.
We initially work in an open set in Rn.
A 0-form is defined to be a smooth
function f. When we integrate a function
f over an m-dimensional subspace S of Rn,
we write it as
- \int_S f\,dx^1 \cdots dx^m.
(The superscripts are indices, not exponents.) We can consider dx^1 through dx^n to be formal objects themselves, rather than tags appended to make integrals look like Riemann sums. Alternatively, we can view them as covectors, and thus a measure of "density" (hence integrable in a general sense). We call the dx^1, …, dx^n basic 1-forms.
We define the wedge
product, "∧", a bilinear "multiplication" operator on
these elements, with the alternating property that
- dx^a \wedge dx^a = 0 \,\!
for all indices a. Note that alternation along with linearity implies dx^b ∧ dx^a = −dx^a ∧ dx^b. This also ensures that the result of the wedge product has an orientation.
We define the set of all these products to be basic 2-forms, and similarly we define the set of products of the form dx^a ∧ dx^b ∧ dx^c to be basic 3-forms. A general
k-form is then a weighted sum of basic k-forms, where the weights
are the smooth functions f. Together these form a vector space
with basic k-forms as the basis vectors, and 0-forms (smooth
functions) as the field of scalars. The wedge product then extends
to k-forms in the natural way. Over Rn at most n covectors can be
linearly independent, thus a k-form with
k > n will always be zero, by the
alternating property.
In addition to the wedge product, there is also
the exterior
derivative operator d. This operator maps k-forms to
(k+1)-forms. For a k-form ω = f dx^a over R^n, we define the action of d by:
- d\omega = \sum_{i=1}^{n} \frac{\partial f}{\partial x^i} \, dx^i \wedge dx^a .
with extension to general k-forms occurring
linearly.
This more general approach allows for a more
natural coordinate-free approach to integration on manifolds. It also allows for a
natural generalisation of the
fundamental theorem of calculus, called Stokes'
theorem, which we may state as
- \int_{\Omega} \mathrm{d}\omega = \int_{\partial\Omega} \omega \,\!
where ω is a general k-form, and
∂Ω denotes the boundary
of the region Ω. Thus in the case that ω is a
0-form and Ω is a closed interval of the real line, this
reduces to the
fundamental theorem of calculus. In the case that ω
is a 1-form and Ω is a 2-dimensional region in the plane,
the theorem reduces to Green's
theorem. Similarly, using 2-forms and 3-forms together with Hodge duality, we can arrive at the classical Stokes' theorem and the divergence theorem. In this way we can see that differential forms provide
a powerful unifying view of integration.
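As a numerical illustration of the 1-form case (Green's theorem), the sketch below checks, for an arbitrarily chosen ω = L dx + M dy on the unit square, that the integral of ω around the boundary equals the integral of its exterior derivative over the interior; the particular L and M are assumptions of the example:

    # Green's theorem (the 1-form case of Stokes' theorem) on the unit square:
    # the boundary integral of L dx + M dy equals the area integral of (dM/dx - dL/dy).
    L = lambda x, y: -x * y            # omega = L dx + M dy
    M = lambda x, y: x * x + y
    dM_dx = lambda x, y: 2 * x
    dL_dy = lambda x, y: -x

    n = 400
    h = 1.0 / n

    # Integrate omega counterclockwise around the boundary of [0,1] x [0,1].
    boundary = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        boundary += L(t, 0.0) * h            # bottom edge: dx = +h, dy = 0
        boundary += M(1.0, t) * h            # right edge:  dx = 0,  dy = +h
        boundary += L(1.0 - t, 1.0) * (-h)   # top edge:    dx = -h, dy = 0
        boundary += M(0.0, 1.0 - t) * (-h)   # left edge:   dx = 0,  dy = -h

    # Integrate the exterior derivative d(omega) = (dM/dx - dL/dy) dx ^ dy over the interior.
    interior = sum((dM_dx((i + 0.5) * h, (j + 0.5) * h) - dL_dy((i + 0.5) * h, (j + 0.5) * h)) * h * h
                   for i in range(n) for j in range(n))

    print(boundary, interior)   # both are close to 1.5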
Methods and applications
Computing integrals
The most basic technique for computing integrals of one real variable is based on the fundamental theorem of calculus. It proceeds like this:
- Choose a function f(x) and an interval [a, b].
- Find an antiderivative of f, that is, a function F such that F' = f.
- By the fundamental theorem of calculus, provided the integrand
and integral have no singularities
on the path of integration,
- \int_a^b f(x)\,dx = F(b)-F(a).
- Therefore the value of the integral is F(b) − F(a).
Note that the integral is not actually the
antiderivative, but the fundamental theorem allows us to use
antiderivatives to evaluate definite integrals.
The difficult step is often finding an
antiderivative of f. It is rarely possible to glance at a function
and write down its antiderivative. More often, it is necessary to
use one of the many techniques that have been developed to evaluate
integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include integration by substitution, integration by parts, trigonometric substitution, and integration by partial fractions.
Even if these techniques fail, it may still be
possible to evaluate a given integral. The next most common
technique is residue
calculus, whilst for nonelementary
integrals Taylor
series can sometimes be used to find the antiderivative. There
are also many less common ways of calculating definite integrals;
for instance, Parseval's
identity can be used to transform an integral over a
rectangular region into an infinite sum. Occasionally, an integral
can be evaluated by a trick; for an example of this, see Gaussian
integral.
Computations of volumes of solids
of revolution can usually be done with disk
integration or shell
integration.
Specific results which have been worked out by
various techniques are collected in the list of
integrals.
Symbolic algorithms
Many problems in mathematics, physics, and
engineering involve integration where an explicit formula for the
integral is desired. Extensive tables
of integrals have been compiled and published over the years
for this purpose. With the spread of computers, many professionals,
educators, and students have turned to computer
algebra systems that are specifically designed to perform
difficult or tedious tasks, including integration. Symbolic
integration presents a special challenge in the development of such
systems.
A major mathematical difficulty in symbolic
integration is that in many cases, a closed formula for the
antiderivative of a rather innocent-looking function simply does not exist. For instance, it is known that the antiderivatives of the functions exp(x^2), x^x and (sin x)/x cannot
be expressed in the closed form involving only rational
and exponential
functions, logarithm,
trigonometric
and
inverse trigonometric functions, and the operations of
multiplication and composition; in other words, none of the three
given functions is integrable in elementary
functions. Differential
Galois theory provides general criteria that allow one to
determine whether the antiderivative of an elementary function is
elementary. Unfortunately, it turns out that functions with closed
expressions of antiderivatives are the exception rather than the
rule. Consequently, computerized algebra systems have no hope of
being able to find an antiderivative for a randomly constructed
elementary function. On the positive side, if the 'building blocks'
for antiderivatives are fixed in advance, it may still be
possible to decide whether the antiderivative of a given function
can be expressed using these blocks and operations of
multiplication and composition, and to find the symbolic answer
whenever it exists. The Risch
algorithm, implemented in Mathematica and
other computer
algebra systems, does just that for functions and
antiderivatives built from rational functions, radicals,
logarithm, and exponential functions.
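As an illustration, an open-source computer algebra system such as SymPy (used here as a stand-in for the systems mentioned above; it is not named in the text) returns elementary antiderivatives where they exist and falls back on special functions otherwise:

    import sympy as sp

    x = sp.symbols('x')

    # An elementary antiderivative, found symbolically.
    print(sp.integrate(x**2 * sp.exp(x), x))      # (x**2 - 2*x + 2)*exp(x)

    # A definite integral evaluated exactly.
    print(sp.integrate(sp.sqrt(x), (x, 0, 1)))    # 2/3

    # No elementary antiderivative exists; the results involve special functions.
    print(sp.integrate(sp.exp(-x**2), x))         # sqrt(pi)*erf(x)/2
    print(sp.integrate(sp.sin(x) / x, x))         # Si(x)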
Some special integrands occur often enough to
warrant special study. In particular, it may be useful to have, in
the set of antiderivatives, the special
functions of physics
(like the
Legendre functions, the hypergeometric
function, the Gamma
function and so on). Extending the Risch-Norman algorithm so
that it includes these functions is possible but challenging.
Most humans are not able to integrate such general formulae, so in a sense computers are more skilled at integrating highly complicated formulae. Very complex formulae are unlikely to have closed-form antiderivatives, so how much of an advantage this presents is a philosophical question that is open for debate.
Numerical quadrature
The integrals encountered in a basic calculus
course are deliberately chosen for simplicity; those found in real
applications are not always so accommodating. Some integrals cannot
be found exactly, some require special functions which themselves
are a challenge to compute, and others are so complex that finding
the exact answer is too slow. This motivates the study and
application of numerical methods for approximating integrals, which
today use floating point
arithmetic on digital electronic computers. Many of the ideas
arose much earlier, for hand calculations; but the speed of
general-purpose computers like the ENIAC created a need
for improvements.
The goals of numerical integration are accuracy,
reliability, efficiency, and generality. Sophisticated methods can
vastly outperform a naive method by all four measures.
Consider, for example, the integral
- \int_{-2}^{2} \tfrac15 \left( \tfrac{1}{100}(322 + 3 x (98 + x (37 + x))) - 24 \frac{x}{1+x^2} \right) dx ,
which has the exact value 3.76. The most naive approach samples the function at equally spaced points and sums the areas of rectangles, one per piece of the interval; this rectangle method converges slowly.
A better approach replaces the horizontal tops of
the rectangles with slanted tops touching the function at the ends
of each piece. This trapezium
rule is almost as easy to calculate; it sums all 17 function
values, but weights the first and last by one half, and again
multiplies by the step width. This immediately improves the
approximation to 3.76925, which is noticeably more accurate.
Furthermore, only 2^10 pieces are needed to achieve 3.76000,
substantially less computation than the rectangle method for
comparable accuracy.
Romberg's
method builds on the trapezoid method to great effect. First,
the step lengths are halved incrementally, giving trapezoid
approximations denoted by T(h0), T(h1), and so on, where hk+1 is
half of hk. For each new step size, only half the new function
values need to be computed; the others carry over from the previous
size. But the really powerful idea is
to interpolate a
polynomial through the approximations, and extrapolate to T(0).
With this method a numerically exact answer here requires only four
pieces (five function values)! The Lagrange polynomial interpolating the pairs {(h_k, T(h_k))}_{k=0…2} = {(4.00, 6.128), (2.00, 4.352), (1.00, 3.908)} is 3.76 + 0.148h^2, producing the extrapolated value 3.76 at h = 0.
Gaussian
quadrature often requires noticeably less work for superior
accuracy. In this example, it can compute the function values at
just two x positions, ±2⁄√3, then double each
value and sum to get the numerically exact answer. The explanation
for this dramatic success lies in error analysis, and a little
luck. An n-point Gaussian method is exact for polynomials of degree
up to 2n−1. The function in this example is a degree 3 polynomial,
plus a term that cancels because the chosen endpoints are symmetric
around zero. (Cancellation also benefits the Romberg method.)
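The figures quoted above are easy to reproduce. The Python sketch below (illustrative, and using the integrand as reconstructed above) applies the trapezium rule with 16 pieces and the two-point Gaussian rule to the integral over [−2, 2]:

    from math import sqrt

    # Example integrand on [-2, 2]; the exact value of its integral is 3.76.
    f = lambda x: 0.2 * ((322 + 3 * x * (98 + x * (37 + x))) / 100 - 24 * x / (1 + x * x))

    a, b = -2.0, 2.0

    # Trapezium rule with 16 equal pieces (17 function values).
    n = 16
    h = (b - a) / n
    trapezoid = h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))
    print(trapezoid)   # 3.76925

    # Two-point Gaussian quadrature: nodes at +/- 2/sqrt(3), each weighted by (b - a)/2.
    gauss = (b - a) / 2 * (f(-2 / sqrt(3)) + f(2 / sqrt(3)))
    print(gauss)       # 3.76, numerically exact here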
Shifting the range left a little, so the integral
is from −2.25 to 1.75, removes the symmetry. Nevertheless, the
trapezoid method is rather slow, the polynomial interpolation
method of Romberg is acceptable, and the Gaussian method requires
the least work — if the number of points is known in advance. As
well, rational interpolation can use the same trapezoid evaluations
as the Romberg method to greater effect.
In practice, each method must use extra
evaluations to ensure an error bound on an unknown function; this
tends to offset some of the advantage of the pure Gaussian method,
and motivates the popular Gauss–Kronrod hybrid. Symmetry can still
be exploited by splitting this integral into two ranges, from −2.25
to −1.75 (no symmetry), and from −1.75 to 1.75 (symmetry). More
broadly, adaptive
quadrature partitions a range into pieces based on function
properties, so that data points are concentrated where they are
needed most.
This brief introduction omits higher-dimensional
integrals (for example, area and volume calculations), where
alternatives such as Monte
Carlo integration have great importance.
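A minimal sketch of the Monte Carlo idea (illustrative only): estimate an integral by averaging the integrand at uniformly random sample points. The example below is one-dimensional for simplicity, but the same recipe extends unchanged to high-dimensional domains, which is where the method shines:

    import random
    from math import sqrt

    f = lambda x: sqrt(x)   # estimate the integral of sqrt(x) over [0, 1]; exact value 2/3

    def monte_carlo(f, a, b, n):
        """Average of f at n uniform random points, times the interval length."""
        return (b - a) * sum(f(random.uniform(a, b)) for _ in range(n)) / n

    random.seed(0)
    for n in (100, 10_000, 1_000_000):
        print(n, monte_carlo(f, 0.0, 1.0, n))   # the error shrinks roughly like 1/sqrt(n)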
A calculus text is no substitute for numerical
analysis, but the reverse is also true. Even the best adaptive
numerical code sometimes requires a user to help with the more
demanding integrals. For example, improper integrals may require a
change of variable or methods that can avoid infinite function
values; and known properties like symmetry and periodicity may
provide critical leverage.
See also
- Lists of integrals - integrals of the most common functions.
- Multiple integral
- Antiderivative
- Numerical integration
- Integral equation
- Riemann integral
- Riemann sum
- Differentiation under the integral sign
- Product integral
References
- Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra
- Integration I . In particular chapters III and IV.
- The History of Mathematics: An Introduction
- A History Of Mathematical Notations Volume II
- Numerical Methods in Scientific Computing
- Real Analysis: Modern Techniques and Their Applications
- Théorie analytique de la chaleur Available in translation as The analytical theory of heat
- The Works of Archimedes (Originally published by Cambridge University Press, 1897, based on J. L. Heiberg's Greek version.)
- Integration in abstract spaces
- Numerical Methods and Software
- Der Briefwechsel von Gottfried Wilhelm Leibniz mit Mathematikern. Erster Band. http://name.umdl.umich.edu/AAX2762.0001.001
- Earliest Uses of Symbols of Calculus
- A history of the calculus
- Real and Complex Analysis
- Theory of the integral
- Introduction to Numerical Analysis .
- Arabic mathematical notation. http://www.w3.org/TR/arabic-math/
External links
- The Integrator by Wolfram Research
- Function Calculator from WIMS
- Mathematical Assistant on Web: online calculation of integrals; allows integration in small steps (with hints for the next step, covering techniques such as integration by parts, substitution, partial fractions, and application of formulas; powered by Maxima)
- P.S. Wang, Evaluation of Definite Integrals by Symbolic Manipulation (1972) - a cookbook of definite integral techniques
- Definite Integrals
- Online Integral Calculator
Online books
- Keisler, H. Jerome, Elementary Calculus: An Approach Using Infinitesimals, University of Wisconsin
- Stroyan, K.D., A Brief Introduction to Infinitesimal Calculus, University of Iowa
- Mauch, Sean, Sean's Applied Math Book, CIT, an online textbook that includes a complete introduction to calculus
- Crowell, Benjamin, Calculus, Fullerton College, an online textbook
- Garrett, Paul, Notes on First-Year Calculus
- Hussain, Faraz, Understanding Calculus, an online textbook
- Kowalk, W.P., Integration Theory, University of Oldenburg. A new concept to an old problem. Online textbook
- Sloughter, Dan, Difference Equations to Differential Equations, an introduction to calculus
- Wikibook of Calculus
- Numerical Methods of Integration at Holistic Numerical Methods Institute
Synonyms, Antonyms and Related Words
a certain, aggregate, algorismic, algorithmic, aliquot, all, all-embracing, all-inclusive,
an, any, any one, atomic, basic, cardinal, complete, component, composite, comprehensive, constituent, decimal, differential, digital, either, elemental, elementary, entire, entity, essential, even, exclusive, exhaustive, exponential, figural, figurate, figurative, finite, formative, fractional, full, fundamental, gross, holistic, imaginary, impair, impossible, inclusive, individual, indivisible, infinite, intact, integrant, integrate, integrated, intrinsic, irrational, irreducible, logarithmic, logometric, lone, monadic, monistic, negative, numeral, numerary, numerative, numeric, odd, omnibus, one, one and indivisible, ordinal, pair, perfect, positive, possible, prime, radical, rational, real, reciprocal, rolled into one,
simple, single, singular, sole, solid, solitary, sound, submultiple, sum, surd, system, total, totality, transcendental, unanalyzable, undivided, unified, uniform, unique, unitary, united, universal, unqualified, utter, whole