Numerical integration techniques offer practical ways to approximate definite integrals. These methods, such as the midpoint and trapezoidal rules, divide the region under a curve into simpler shapes and sum their areas to estimate the total area.
Error calculations and bounds help assess the accuracy of these approximations. Simpson's rule, a more advanced technique, uses parabolic arcs for improved precision. Understanding when methods over- or underestimate integrals is crucial for selecting the right approach.
Numerical Integration
Midpoint and trapezoidal rule applications
- Midpoint rule divides the interval into equal subintervals and uses the midpoint of each subinterval to calculate the height of rectangles approximating the area under the curve ($\int_0^1 e^x dx$)
- Formula $\int_a^b f(x) dx \approx \Delta x \sum_{i=1}^n f(x_i^*)$ where $\Delta x = \frac{b-a}{n}$ is the width of each subinterval and $x_i^* = \frac{x_{i-1} + x_i}{2}$ is the midpoint of each subinterval
- Trapezoidal rule divides the interval into equal subintervals and uses trapezoids formed by connecting the function values at the endpoints of each subinterval to approximate the area under the curve ($\int_0^{\pi} \sin x dx$)
- Formula $\int_a^b f(x) dx \approx \frac{\Delta x}{2} [f(x_0) + 2f(x_1) + 2f(x_2) + \cdots + 2f(x_{n-1}) + f(x_n)]$ where $\Delta x = \frac{b-a}{n}$ is the width of each subinterval
- Both methods are examples of Riemann sum approximations, which partition the interval and sum the areas of simpler shapes
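A minimal Python sketch of both rules, using illustrative helper names `midpoint_rule` and `trapezoidal_rule` (not from any particular library), applied to the two example integrals above:

```python
import math

def midpoint_rule(f, a, b, n):
    """Approximate the integral of f on [a, b] with n midpoint rectangles."""
    dx = (b - a) / n
    # Each rectangle's height is f evaluated at the midpoint of its subinterval.
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

def trapezoidal_rule(f, a, b, n):
    """Approximate the integral of f on [a, b] with n trapezoids."""
    dx = (b - a) / n
    # Endpoints get weight 1, interior nodes weight 2, all scaled by dx/2.
    interior = sum(f(a + i * dx) for i in range(1, n))
    return (dx / 2) * (f(a) + 2 * interior + f(b))

# Example integrals from the list above.
print(midpoint_rule(math.exp, 0, 1, 10))           # ~1.7176, exact e - 1 ~ 1.7183
print(trapezoidal_rule(math.sin, 0, math.pi, 10))  # ~1.9835, exact 2
```

Doubling $n$ cuts the error of either rule by roughly a factor of four, which foreshadows the $1/n^2$ behavior of the error bounds below.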
Error calculations in numerical integration
- Absolute error measures the difference between the exact value of the integral and the approximation obtained through numerical integration methods ($\int_0^1 x^2 dx$)
- Formula $E_a = |\text{exact value} - \text{approximation}|$ provides the magnitude of the error
- Relative error expresses the absolute error as a fraction of the exact value of the integral ($\int_1^2 \frac{1}{x} dx$)
- Formula $E_r = \frac{|\text{exact value} - \text{approximation}|}{|\text{exact value}|}$ gives the error as a proportion of the true value
- Midpoint rule error bound $|E_M| \leq \frac{K(b-a)^3}{24n^2}$, where $K$ is the maximum value of $|f''(x)|$ on the interval $[a,b]$, gives an upper limit on the error of the midpoint rule approximation ($\int_0^1 \cos x dx$)
- Trapezoidal rule error bound $|E_T| \leq \frac{K(b-a)^3}{12n^2}$, where $K$ is the maximum value of $|f''(x)|$ on the interval $[a,b]$, provides an upper limit for the error in the trapezoidal rule approximation ($\int_0^{\pi} \sin x dx$)
- These error bounds help determine the convergence rate of the numerical approximation as the number of subintervals increases
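As a rough check of these definitions, the sketch below (redefining the illustrative `midpoint_rule` helper from the earlier sketch so the snippet stands alone) computes the absolute and relative error for $\int_0^1 \cos x dx = \sin 1$ and compares the observed error with the midpoint bound, taking $K = 1$ since $|f''(x)| = |\cos x| \leq 1$ on $[0,1]$:

```python
import math

def midpoint_rule(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

# Example integral from the error-bound bullet: integral of cos x on [0, 1] = sin(1).
exact = math.sin(1.0)
n = 10
approx = midpoint_rule(math.cos, 0.0, 1.0, n)

abs_error = abs(exact - approx)       # E_a = |exact value - approximation|
rel_error = abs_error / abs(exact)    # E_r = E_a / |exact value|

# Midpoint error bound with K = max |f''(x)| = max |cos x| = 1 on [0, 1].
K = 1.0
bound = K * (1.0 - 0.0) ** 3 / (24 * n ** 2)

print(f"absolute error {abs_error:.2e}, relative error {rel_error:.2e}")
print(f"midpoint bound {bound:.2e}")  # the observed error stays below the bound
```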
Over- vs underestimation in integration
- Midpoint rule underestimates the integral when the function is concave up ($f''(x) > 0$) on the interval ($\int_0^1 e^x dx$) and overestimates when concave down ($f''(x) < 0$) ($\int_0^1 \ln x dx$)
- Trapezoidal rule overestimates the integral when the function is concave up ($f''(x) > 0$) on the interval ($\int_1^2 \frac{1}{x^2} dx$) and underestimates when concave down ($f''(x) < 0$) ($\int_0^{\pi/2} \sin x dx$); the numerical check below illustrates both behaviors for $e^x$
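A quick confirmation for the concave-up example $\int_0^1 e^x dx$, again using the illustrative helpers from the earlier sketch (repeated here so the snippet runs on its own):

```python
import math

def midpoint_rule(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

def trapezoidal_rule(f, a, b, n):
    dx = (b - a) / n
    return (dx / 2) * (f(a) + 2 * sum(f(a + i * dx) for i in range(1, n)) + f(b))

# e^x is concave up on [0, 1], so midpoint lands low and trapezoid lands high.
exact = math.e - 1
mid = midpoint_rule(math.exp, 0, 1, 10)
trap = trapezoidal_rule(math.exp, 0, 1, 10)
print(mid < exact < trap)  # True: underestimate < exact value < overestimate
```

Because the midpoint error is roughly half the trapezoidal error and opposite in sign, the weighted average $\frac{2M_n + T_n}{3}$ cancels the leading error term; this is one way to motivate Simpson's rule below.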
Simpson's rule for definite integrals
- Simpson's rule approximates the area under a curve by dividing the interval into an even number of subintervals and using parabolas to estimate the area ($\int_0^1 x^3 dx$)
- Formula $\int_a^b f(x) dx \approx \frac{\Delta x}{3} [f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)]$ where $\Delta x = \frac{b-a}{n}$ and $n$ is even
- Error bound for Simpson's rule $|E_S| \leq \frac{K(b-a)^5}{180n^4}$, where $K$ is the maximum value of $|f^{(4)}(x)|$ on the interval $[a,b]$, limits the maximum error ($\int_0^{\pi} \cos x dx$)
- Achieving a specified accuracy requires choosing the number of subintervals $n$ large enough that the error bound falls below the desired tolerance ($\int_0^1 \sqrt{1+x^2} dx$ with error $< 10^{-4}$), as in the sketch after this list
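A sketch combining the formula and the error bound, with an illustrative `simpsons_rule` helper; the value $K = 3$ for $\int_0^1 \sqrt{1+x^2} dx$ comes from $|f^{(4)}(x)| = \frac{3|4x^2-1|}{(1+x^2)^{7/2}} \leq 3$ on $[0,1]$, which is worth verifying before relying on it:

```python
import math

def simpsons_rule(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    dx = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Interior nodes alternate weights 4 (odd index) and 2 (even index).
        total += (4 if i % 2 else 2) * f(a + i * dx)
    return dx / 3 * total

# Target: integral of sqrt(1 + x^2) on [0, 1] with error below 10^-4.
f = lambda x: math.sqrt(1 + x * x)
a, b, tol = 0.0, 1.0, 1e-4
K = 3.0  # assumed bound on |f''''| over [0, 1]; see the lead-in above

# Smallest even n with K(b - a)^5 / (180 n^4) < tol.
n = math.ceil((K * (b - a) ** 5 / (180 * tol)) ** 0.25)
n += n % 2  # Simpson's rule needs an even number of subintervals
approx = simpsons_rule(f, a, b, n)
exact = (math.sqrt(2) + math.asinh(1.0)) / 2  # antiderivative evaluated at x = 1
print(n, abs(exact - approx) < tol)  # 4 True
```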
Advanced numerical integration techniques
- Composite rules apply basic integration methods (like midpoint, trapezoidal, or Simpson's) to smaller subintervals and sum the results for improved accuracy
- Gaussian quadrature methods use specially chosen points and weights to achieve higher accuracy with fewer function evaluations
- Adaptive quadrature algorithms adjust the subinterval sizes based on the function's behavior to optimize the balance between accuracy and computational efficiency
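As one concrete illustration of Gaussian quadrature, the sketch below uses NumPy's Gauss-Legendre nodes and weights (`numpy.polynomial.legendre.leggauss`, assuming NumPy is available; the `gauss_legendre` wrapper name is chosen here for illustration) to integrate $e^x$ over $[0,1]$ with only five function evaluations:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature of f over [a, b]."""
    # Nodes and weights are tabulated for [-1, 1]; rescale them onto [a, b].
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(w * f(t))

# Five evaluations of e^x on [0, 1] already agree with e - 1 to ~1e-12.
print(abs(gauss_legendre(np.exp, 0.0, 1.0, 5) - (np.e - 1.0)))
```

Reaching roughly $10^{-12}$ accuracy with the trapezoidal rule would take on the order of $10^5$ subintervals, which is the kind of efficiency gain Gaussian and adaptive methods aim for.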