
Mathematics for Artificial Intelligence : Numerical Methods

 A simplified guide to brushing up on Mathematics for Artificial Intelligence, Machine Learning and Data Science: Numerical Methods (important pointers only)


Module VII : Numerical Methods

 I. Bisection Method.

A numerical technique for solving equations of the form f(x)=0. It is a type of root-finding method that repeatedly narrows down an interval where a root of the function exists.

Steps:

  1. Choose the initial interval [a, b]: Select two points a and b such that f(a) and f(b) have opposite signs. This indicates that there is at least one root in the interval [a, b].

  2. Compute the midpoint c: Calculate the midpoint of the interval:

    c = \frac{a + b}{2}
  3. Evaluate the function at the midpoint: Compute f(c).

  4. Determine the subinterval:

    • If f(a)·f(c) < 0, the root lies in the interval [a, c]. Set b = c.
    • If f(b)·f(c) < 0, the root lies in the interval [c, b]. Set a = c.
    • If f(c) = 0, then c is the root, and the method stops.
  5. Check for convergence: Repeat steps 2-4 until the interval [a, b] is sufficiently small (i.e., |b - a| < ε for some tolerance ε), or until the value of f(c) is close enough to zero.

  6. Output the result: The midpoint c is an approximation of the root.

 Eg: To find a root of the function f(x) = x^2 - 4.

  1. Choose the initial interval: [a, b] = [1, 3]

    • f(1) = 1^2 - 4 = -3
    • f(3) = 3^2 - 4 = 5
    • Since f(1) and f(3) have opposite signs, there is a root in the interval [1, 3].
  2. Compute the midpoint:

    c = \frac{1 + 3}{2} = 2
  3. Evaluate the function at the midpoint:

    • f(2) = 2^2 - 4 = 0
  4. Since f(2) = 0, the root is exactly at c = 2.

For functions where the root is not exactly at the midpoint, the method would continue iterating until the desired precision is achieved.
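
The steps above can be sketched as a short Python routine (the function name `bisect` and its defaults are illustrative, not from any particular library):

```python
def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Approximate a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2          # midpoint of the current interval
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c             # exact root hit, or interval small enough
        if fa * fc < 0:          # root lies in [a, c]
            b, fb = c, fc
        else:                    # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2

print(bisect(lambda x: x**2 - 4, 1, 3))  # the worked example: returns 2.0 on the first midpoint
```

Because the first midpoint of [1, 3] is exactly 2, the routine terminates immediately, just like the hand computation.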

 

II. Trapezoidal Rule.

 A numerical method used to approximate the definite integral of a function.

The trapezoidal rule approximates the integral of a function f(x) over an interval [a, b] using the formula:

\int_a^b f(x) \, dx \approx \frac{b - a}{2} \left[ f(a) + f(b) \right]

For better accuracy, the interval [a, b] can be divided into n smaller subintervals of equal width. If [a, b] is divided into n subintervals, each of width h = (b - a)/n, the composite trapezoidal rule is used:

\int_a^b f(x) \, dx \approx \frac{h}{2} \left[ f(x_0) + 2 \sum_{i=1}^{n-1} f(x_i) + f(x_n) \right]

where x_0 = a, x_n = b, and x_i = a + ih for i = 1, 2, ..., n-1.

Steps

  1. Divide the interval [a, b]: Divide the interval into n equal subintervals. The width of each subinterval is h = (b - a)/n.

  2. Calculate the endpoints: Compute the function values at the endpoints of each subinterval. These points are x_0, x_1, ..., x_n, where x_i = a + ih.

  3. Apply the trapezoidal rule formula: Sum the function values, counting each endpoint once and each interior point twice.

  4. Calculate the approximation: Multiply the result by h/2 to get the final approximation of the integral.

Eg: Approximate the integral of f(x) = e^x over the interval [0, 1] using the trapezoidal rule with n = 4 subintervals.

  1. Divide the interval [0, 1]:

    h = \frac{1 - 0}{4} = \frac{1}{4}
  2. Calculate the endpoints:

    x_0 = 0, \quad x_1 = \frac{1}{4}, \quad x_2 = \frac{1}{2}, \quad x_3 = \frac{3}{4}, \quad x_4 = 1
  3. Evaluate the function at the points:

    f(x_0) = e^0 = 1, \quad f(x_1) = e^{1/4} \approx 1.284, \quad f(x_2) = e^{1/2} \approx 1.649
    f(x_3) = e^{3/4} \approx 2.117, \quad f(x_4) = e^1 \approx 2.718
  4. Apply the trapezoidal rule formula:

    \int_0^1 e^x \, dx \approx \frac{h}{2} \left[ f(x_0) + 2 \sum_{i=1}^{3} f(x_i) + f(x_4) \right]
    = \frac{1/4}{2} \left[ f(0) + 2 \left( f\left(\frac{1}{4}\right) + f\left(\frac{1}{2}\right) + f\left(\frac{3}{4}\right) \right) + f(1) \right]
  5. Calculate the sum:

    = \frac{1}{8} \left[ 1 + 2 \left( 1.284 + 1.649 + 2.117 \right) + 2.718 \right] = \frac{1}{8} \left[ 1 + 2 \times 5.05 + 2.718 \right] = \frac{1}{8} \left[ 1 + 10.1 + 2.718 \right] = \frac{1}{8} \left[ 13.818 \right] = 1.72725

So, the approximate value of the integral \int_0^1 e^x \, dx using the trapezoidal rule with 4 subintervals is approximately 1.72725.

The exact value of the integral \int_0^1 e^x \, dx is e - 1 \approx 1.71828. The trapezoidal rule gives a reasonably close approximation, which can be improved by increasing the number of subintervals n.
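
A minimal composite trapezoidal rule in Python (the helper name `trapezoid` is our own) reproduces the hand computation:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)              # endpoints counted once
    for i in range(1, n):
        total += 2 * f(a + i * h)    # interior points counted twice
    return h / 2 * total

approx = trapezoid(math.exp, 0, 1, 4)
print(round(approx, 5))  # 1.72722 (exact: e - 1 ≈ 1.71828)
```

At full floating-point precision the result is 1.72722; the hand calculation above gets 1.72725 only because the function values were rounded to three decimals first.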

 

III. Secant Method.

 The secant method is an iterative numerical technique used to find roots of a function f(x)=0. Unlike the bisection method, which requires the function values to have opposite signs at the endpoints of an interval, the secant method uses two initial approximations to generate a sequence of improving approximations to the root.

The secant method approximates the root by using the following iterative formula:

x_{n+1} = x_n - f(x_n) \, \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}

where:

  • x_n and x_{n-1} are the current and previous approximations, respectively.
  • f(x_n) and f(x_{n-1}) are the function values at these approximations.

Steps

  1. Choose initial approximations: Select two initial guesses x_0 and x_1 close to the root.

  2. Iterate using the secant formula: Use the secant formula to compute the next approximation x_{n+1}.

  3. Check for convergence: Repeat the iteration until the difference between successive approximations is smaller than a predetermined tolerance ε or until the function value f(x_{n+1}) is close to zero.

  4. Output the result: The final approximation x_{n+1} is taken as the root.

 Eg: Find a root of the function f(x) = x^2 - 2 using the secant method.

  1. Choose initial approximations: x_0 = 1 and x_1 = 2.

  2. First iteration:

    x_2 = x_1 - f(x_1) \frac{x_1 - x_0}{f(x_1) - f(x_0)}
    f(x_0) = 1^2 - 2 = -1, \quad f(x_1) = 2^2 - 2 = 2
  3. x_2 = 2 - 2 \cdot \frac{2 - 1}{2 - (-1)} = 2 - \frac{2}{3} = \frac{4}{3} \approx 1.3333
  4. Second iteration:

    x_3 = x_2 - f(x_2) \frac{x_2 - x_1}{f(x_2) - f(x_1)}
    f(x_2) = \left(\frac{4}{3}\right)^2 - 2 = \frac{16}{9} - \frac{18}{9} = -\frac{2}{9}
    x_3 = \frac{4}{3} - \left( -\frac{2}{9} \right) \frac{\frac{4}{3} - 2}{-\frac{2}{9} - 2}
    = \frac{4}{3} - \left( -\frac{2}{9} \right) \frac{-\frac{2}{3}}{-\frac{20}{9}}
    = \frac{4}{3} + \frac{2}{9} \cdot \frac{3}{10}
    = \frac{4}{3} + \frac{1}{15} = \frac{21}{15} = \frac{7}{5} = 1.4
  5. Further iterations: Repeat the above steps until the difference between successive approximations is less than the tolerance ε.
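
The iteration is easy to express in code. A sketch (function name `secant` is illustrative; the stopping rule uses the difference between successive approximations, as in step 3):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Iterate the secant formula until successive approximations agree within tol."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                 # flat secant line: cannot divide, stop here
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x**2 - 2, 1, 2)
print(round(root, 6))  # 1.414214, i.e. sqrt(2)
```

The first computed iterate is 4/3 ≈ 1.3333, matching the hand calculation, and the sequence converges rapidly to √2.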

 

IV. Newton-Raphson Method.

An iterative numerical technique used to find approximations to the roots of a real-valued function f(x)=0. It is known for its fast convergence properties, especially when the initial guess is close to the actual root.

The Newton-Raphson iteration formula is given by:

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

where:

  • x_n is the current approximation.
  • f(x_n) is the value of the function at x_n.
  • f'(x_n) is the value of the derivative of the function at x_n.

Steps

  1. Choose an initial approximation: Select an initial guess x_0 close to the root.

  2. Iterate using the Newton-Raphson formula: Use the formula to compute the next approximation x_{n+1}.

  3. Check for convergence: Repeat the iteration until the difference between successive approximations is smaller than a predetermined tolerance ε or until the function value f(x_{n+1}) is close to zero.

  4. Output the result: The final approximation x_{n+1} is taken as the root.

 Eg: Find a root of the function f(x) = x^2 - 2 using the Newton-Raphson method.

  1. Choose an initial approximation: x_0 = 1.

  2. First iteration:

    x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}
    f(x_0) = 1^2 - 2 = -1, \quad f'(x_0) = 2x_0 = 2 \times 1 = 2
    x_1 = 1 - \frac{-1}{2} = 1 + \frac{1}{2} = 1.5
  3. Second iteration:

    x_2 = x_1 - \frac{f(x_1)}{f'(x_1)}
    f(x_1) = 1.5^2 - 2 = 0.25, \quad f'(x_1) = 2 \times 1.5 = 3
    x_2 = 1.5 - \frac{0.25}{3} = 1.5 - 0.0833 = 1.4167
  4. Third iteration:

    x_3 = x_2 - \frac{f(x_2)}{f'(x_2)}
    f(x_2) = 1.4167^2 - 2 \approx 0.0069, \quad f'(x_2) = 2 \times 1.4167 = 2.8334
    x_3 = 1.4167 - \frac{0.0069}{2.8334} \approx 1.4167 - 0.0024 = 1.4143
  5. Further iterations: Continue iterating until the difference between successive approximations is less than the tolerance ε.
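
A compact sketch of the iteration (the name `newton` and its defaults are our own; the caller supplies the derivative explicitly):

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration; df is the derivative of f."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)   # f(x_n) / f'(x_n)
        x -= step
        if abs(step) < tol:
            return x
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
print(round(root, 6))  # 1.414214, matching the iterations above
```

Starting from x_0 = 1 the iterates are 1.5, 1.41667, 1.41422, ..., converging quadratically to √2.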

 

V. Numerical Stability and Error Analysis.

1. Numerical Stability

Numerical stability refers to how errors are propagated by an algorithm. An algorithm is numerically stable if small changes in the input or intermediate calculations lead to small changes in the output. Conversely, if small changes in the input result in large changes in the output, the algorithm is numerically unstable.

Types of Stability:

  1. Forward Stability: An algorithm is forward stable if the computed solution is close to the exact solution of the given problem. This implies that the errors in the output are proportional to the errors in the input.

  2. Backward Stability: An algorithm is backward stable if the computed solution is the exact solution to a slightly perturbed version of the original problem. This means the algorithm produces results that are accurate for some nearby problem.

  3. Mixed Stability: Combines aspects of both forward and backward stability, considering both input and output errors.

2. Error Analysis

Error analysis is the study of the types, sources and propagation of errors in numerical computations. It helps in understanding the accuracy and precision of numerical solutions.

Types of Errors:

  1. Round-off Error: Errors that occur because of the finite precision with which computers represent real numbers. For example, floating-point arithmetic can introduce small errors in calculations due to truncation or rounding.

  2. Truncation Error: Errors that result from approximating a mathematical procedure. For example, truncating an infinite series to a finite number of terms introduces a truncation error.

  3. Approximation Error: Errors that arise when a mathematical model or method approximates a physical process or another mathematical model. This includes discretization errors in methods like finite differences or finite elements.

3. Error Propagation:

Error propagation studies how errors in input data or intermediate steps affect the final result. It is crucial for understanding and mitigating the impact of errors in numerical algorithms.

 4. Condition Number:

The condition number of a problem measures its sensitivity to changes in input. A problem with a high condition number is ill-conditioned, meaning small changes in input can cause large changes in output. Conversely, a problem with a low condition number is well-conditioned.

For example, the condition number of a matrix A in solving the linear system Ax = b is given by κ(A) = ‖A‖ · ‖A⁻¹‖. If κ(A) is large, the system is ill-conditioned.
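
As a hand-rolled illustration (not a library routine), the condition number of a 2×2 matrix can be computed in the infinity norm (maximum absolute row sum) via the explicit inverse formula:

```python
def inf_norm(M):
    """Infinity norm of a 2x2 matrix: maximum absolute row sum."""
    return max(abs(row[0]) + abs(row[1]) for row in M)

def cond_2x2(M):
    """Condition number k(M) = ||M|| * ||M^-1|| using the 2x2 inverse formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return inf_norm(M) * inf_norm(inv)

print(cond_2x2([[1, 0], [0, 1]]))       # identity matrix: k = 1, well-conditioned
print(cond_2x2([[1, 1], [1, 1.0001]]))  # nearly dependent rows: k ~ 4e4, ill-conditioned
```

The nearly singular matrix has a condition number around 40000, so a relative perturbation of the right-hand side b can be amplified by roughly that factor in the solution x.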

 5. Mitigating Errors

  1. Improving Precision: Use higher precision arithmetic (e.g., double precision instead of single precision).
  2. Algorithm Choice: Choose stable algorithms (e.g., using LU decomposition with pivoting instead of Gaussian elimination without pivoting).
  3. Conditioning: Precondition the problem (e.g., scaling the input data to improve the condition number).
  4. Error Estimation: Use a posteriori error estimates to assess the accuracy of the computed solution.

 Eg:  Error Propagation in Polynomial Evaluation

Evaluating a polynomial p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n using Horner's method:

p(x) = a_0 + x(a_1 + x(a_2 + \cdots + x(a_{n-1} + x \cdot a_n) \cdots))

Horner's method is more stable than the naive approach because it minimizes the number of operations and thus the potential round-off errors.
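The nested form translates directly into a loop (a sketch; the helper name `horner` is our own):

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n; coeffs = [a_0, a_1, ..., a_n]."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a   # one multiply and one add per coefficient
    return result

# p(x) = 1 + 2x + 3x^2 evaluated at x = 2: 1 + 4 + 12 = 17
print(horner([1, 2, 3], 2))  # 17.0
```

Note that the loop performs only n multiplications and n additions, versus roughly n^2/2 multiplications for the naive term-by-term evaluation.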

 

VI. Euler's Method for ODEs.

 Euler's method is a simple and widely used numerical technique for solving ordinary differential equations (ODEs).

Consider an initial value problem of the form:

\frac{dy}{dx} = f(x, y), \quad y(x_0) = y_0

where f(x, y) is a given function, y(x_0) = y_0 is the initial condition, and we seek the value of y at subsequent points x_1, x_2, ...

Euler's method uses the following iterative formula to approximate the solution:

y_{n+1} = y_n + h f(x_n, y_n)

where:

  • y_n is the approximation of y at x_n.
  • x_{n+1} = x_n + h is the next value of x.
  • h is the step size.

Steps of Euler's Method

  1. Initialization: Set the initial values x_0 and y_0, and choose the step size h.
  2. Iteration: Use the Euler formula to compute y_{n+1} from y_n for n = 0, 1, 2, \ldots until the desired interval is covered.
  3. Output: The values (x_n, y_n) give the approximate solution to the ODE at discrete points.

 Eg : Use Euler's method to solve the initial value problem:

\frac{dy}{dx} = x + y, \quad y(0) = 1

on the interval [0, 1] with a step size of h = 0.2.

  1. Initialization:

    x_0 = 0, \quad y_0 = 1, \quad h = 0.2
  2. Iteration:

    • Step 1:

      y_1 = y_0 + h f(x_0, y_0) = 1 + 0.2 (0 + 1) = 1.2
      x_1 = x_0 + h = 0 + 0.2 = 0.2
    • Step 2:

      y_2 = y_1 + h f(x_1, y_1) = 1.2 + 0.2 (0.2 + 1.2) = 1.2 + 0.28 = 1.48
      x_2 = x_1 + h = 0.2 + 0.2 = 0.4
    • Step 3:

      y_3 = y_2 + h f(x_2, y_2) = 1.48 + 0.2 (0.4 + 1.48) = 1.48 + 0.376 = 1.856
      x_3 = x_2 + h = 0.4 + 0.2 = 0.6
    • Step 4:

      y_4 = y_3 + h f(x_3, y_3) = 1.856 + 0.2 (0.6 + 1.856) = 1.856 + 0.4912 = 2.3472
      x_4 = x_3 + h = 0.6 + 0.2 = 0.8
    • Step 5:

      y_5 = y_4 + h f(x_4, y_4) = 2.3472 + 0.2 (0.8 + 2.3472) = 2.3472 + 0.62944 = 2.97664
      x_5 = x_4 + h = 0.8 + 0.2 = 1.0
  3. Output:

The approximate solution at x = 1 is y(1) \approx 2.97664.

Error and Stability:

Euler's method is simple and easy to implement, but it has limitations:

  • Global Truncation Error: The error accumulates over steps and is proportional to h. The global error is O(h), making it less accurate for large h.
  • Stability: For stiff ODEs, Euler's method can be unstable unless the step size h is very small.
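
The five-step walk-through above can be reproduced with a few lines of Python (the function name `euler` is illustrative):

```python
def euler(f, x0, y0, h, n_steps):
    """Advance y' = f(x, y) from (x0, y0) by n_steps Euler steps of size h."""
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)   # y_{n+1} = y_n + h f(x_n, y_n)
        x += h
    return y

# dy/dx = x + y, y(0) = 1, five steps of h = 0.2 across [0, 1]
y_at_1 = euler(lambda x, y: x + y, 0.0, 1.0, 0.2, 5)
print(round(y_at_1, 5))  # 2.97664, as in the worked example
```

Halving h (using 10 steps of h = 0.1) brings the result closer to the exact solution y(1) = 2e - 2 ≈ 3.43656, consistent with the O(h) global error noted above.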

 

VII. Runge-Kutta Methods.

 Runge-Kutta methods are a family of iterative techniques for solving ODEs that achieve higher accuracy than Euler's method by combining several slope evaluations per step. The most commonly used is the fourth-order method, often referred to simply as the Runge-Kutta method.

The general form of an s-stage Runge-Kutta method for solving the initial value problem

\frac{dy}{dx} = f(x, y), \quad y(x_0) = y_0

is given by:

y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i

where:

k_i = f\left( x_n + c_i h, y_n + h \sum_{j=1}^{i-1} a_{ij} k_j \right)

for i = 1, 2, \ldots, s. The coefficients a_{ij}, b_i, and c_i define a specific Runge-Kutta method and are typically arranged in a Butcher tableau.

 Fourth-Order Runge-Kutta Method (RK4)

The fourth-order Runge-Kutta method is the most popular Runge-Kutta method due to its accuracy and simplicity. It is given by the following formulas:

  1. Compute the intermediate slopes:

    k_1 = f(x_n, y_n)
    k_2 = f\left(x_n + \frac{h}{2}, y_n + \frac{h}{2} k_1\right)
    k_3 = f\left(x_n + \frac{h}{2}, y_n + \frac{h}{2} k_2\right)
    k_4 = f(x_n + h, y_n + h k_3)
  2. Update the solution:

    y_{n+1} = y_n + \frac{h}{6} (k_1 + 2k_2 + 2k_3 + k_4)

Eg : Use the fourth-order Runge-Kutta method to solve the initial value problem:

\frac{dy}{dx} = x + y, \quad y(0) = 1

on the interval [0, 1] with a step size of h = 0.2.

  1. Initialization:

    x_0 = 0, \quad y_0 = 1, \quad h = 0.2
  2. First iteration:

    • Compute the intermediate slopes:

      k_1 = f(x_0, y_0) = 0 + 1 = 1
      k_2 = f\left(x_0 + \frac{h}{2}, y_0 + \frac{h}{2} k_1\right) = f(0.1, 1.1) = 0.1 + 1.1 = 1.2
      k_3 = f\left(x_0 + \frac{h}{2}, y_0 + \frac{h}{2} k_2\right) = f(0.1, 1.12) = 0.1 + 1.12 = 1.22
      k_4 = f(x_0 + h, y_0 + h k_3) = f(0.2, 1.244) = 0.2 + 1.244 = 1.444
    • Update the solution:

      y_1 = y_0 + \frac{h}{6} (k_1 + 2k_2 + 2k_3 + k_4) = 1 + \frac{0.2}{6} (1 + 2.4 + 2.44 + 1.444) = 1 + \frac{0.2}{6} \times 7.284 = 1.2428

 Continue this process for more iterations to find the solution at desired points.
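
A single RK4 step in Python (the function name `rk4_step` is our own) reproduces the first iteration:

```python
def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda x, y: x + y
y1 = rk4_step(f, 0.0, 1.0, 0.2)
print(round(y1, 4))  # 1.2428, matching the first iteration above
```

Repeating the step five times advances the solution across [0, 1]; with its O(h^4) global error, RK4 tracks the exact solution y(1) = 2e - 2 ≈ 3.43656 far more closely than Euler's method at the same step size.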

 

VIII. Simpson's Rule.

 Simpson's rule is a numerical method for approximating the definite integral of a function. Simpson's rule uses parabolic arcs instead of straight lines to approximate the area under a curve.

For a function f(x) defined on the interval [a, b], Simpson's rule approximates the integral \int_a^b f(x) \, dx as follows:

  1. Simpson's 1/3 Rule:

    \int_a^b f(x) \, dx \approx \frac{b - a}{6} \left[ f(a) + 4f\left( \frac{a + b}{2} \right) + f(b) \right]

    This basic rule fits a parabola through the endpoints and the midpoint of [a, b], i.e. it treats the interval as two equal subintervals of width h = (b - a)/2.

  2. Composite Simpson's 1/3 Rule: For better accuracy, especially over larger intervals, the interval [a, b] is divided into n equal subintervals (where n is even), each of width h = (b - a)/n. The composite Simpson's rule is then:

    \int_a^b f(x) \, dx \approx \frac{h}{3} \left[ f(x_0) + 4 \sum_{i=1, 3, 5, \ldots, n-1} f(x_i) + 2 \sum_{i=2, 4, 6, \ldots, n-2} f(x_i) + f(x_n) \right]

    where x_i = a + ih for i = 0, 1, \ldots, n.

Eg: Approximate the integral \int_0^2 e^x \, dx using Simpson's rule with n = 4 subintervals.

  1. Define the function:

    f(x) = e^x
  2. Set the interval and subinterval width:

    a = 0, \quad b = 2, \quad n = 4, \quad h = \frac{b - a}{n} = \frac{2 - 0}{4} = 0.5
  3. Compute the function values at the required points:

    x_0 = 0, \quad x_1 = 0.5, \quad x_2 = 1.0, \quad x_3 = 1.5, \quad x_4 = 2.0
    f(x_0) = e^0 = 1
    f(x_1) = e^{0.5} \approx 1.64872
    f(x_2) = e^1 \approx 2.71828
    f(x_3) = e^{1.5} \approx 4.48169
    f(x_4) = e^2 \approx 7.38906
  4. Apply the composite Simpson's rule formula:

    \int_0^2 e^x \, dx \approx \frac{h}{3} \left[ f(x_0) + 4 (f(x_1) + f(x_3)) + 2 f(x_2) + f(x_4) \right]
    \approx \frac{0.5}{3} \left[ 1 + 4(1.64872 + 4.48169) + 2(2.71828) + 7.38906 \right]
    \approx \frac{0.5}{3} \left[ 1 + 24.52164 + 5.43656 + 7.38906 \right]
    \approx \frac{0.5}{3} \left[ 38.34726 \right]
    \approx 6.39121

The exact value of the integral is e^2 - 1 \approx 6.38906, so the approximation using Simpson's rule is quite accurate.
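
A composite Simpson's rule sketch in Python (the helper name `simpson` is our own) confirms the hand computation:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)  # odd indices weighted 4, even 2
    return h / 3 * total

approx = simpson(math.exp, 0, 2, 4)
print(round(approx, 5))  # 6.39121 (exact: e^2 - 1 ≈ 6.38906)
```

With the same four subintervals, Simpson's rule is already accurate to about 0.03%, far better than the trapezoidal rule would manage, reflecting its higher-order error term.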
