Solving f(x) = g(x) Using Successive Approximation: A Three-Iteration Example

In mathematics, finding solutions to equations is a fundamental task. When dealing with complex equations, analytical solutions may not always be feasible. In such cases, numerical methods provide valuable tools for approximating solutions. One such method is the successive approximation method, also known as the fixed-point iteration method. This article explores the application of the successive approximation method to solve the equation f(x) = g(x), where f(x) and g(x) are given functions. We will delve into the iterative process, discuss the importance of the initial guess, and demonstrate the method's effectiveness through three iterations.

Understanding Successive Approximation

The successive approximation method is an iterative technique used to find the roots of an equation or, more generally, to solve an equation of the form x = h(x). The core idea is to start with an initial guess, x₀, and then iteratively refine this guess using the equation xₙ₊₁ = h(xₙ), where n represents the iteration number. The sequence of approximations, x₀, x₁, x₂, ..., ideally converges to a solution of the equation. The choice of the function h(x) is crucial for the method's convergence. In our case, we have the equation f(x) = g(x), and we need to rearrange it into the form x = h(x). This rearrangement might not be unique, and different rearrangements can lead to different convergence behaviors.
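
To make the iteration concrete, here is a minimal Python sketch of the generic fixed-point loop; the function h, the initial guess x0, and the number of iterations are placeholders supplied by the caller.

    def fixed_point_iteration(h, x0, num_iterations):
        """Apply x_{n+1} = h(x_n) starting from x0 and return all iterates."""
        iterates = [x0]
        x = x0
        for _ in range(num_iterations):
            x = h(x)  # one successive-approximation step
            iterates.append(x)
        return iterates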

The convergence of the successive approximation method depends on the properties of the function h(x). A sufficient condition for convergence is that h(x) is a contraction mapping in a neighborhood of the solution: there exists a constant L, with 0 ≤ L < 1, such that |h(x) - h(y)| ≤ L|x - y| for all x and y in the neighborhood. The constant L is called the Lipschitz constant, and it essentially measures how quickly h(x) can change. In practice, a convenient way to guarantee this is to check that |h'(x)| ≤ L < 1 throughout the neighborhood; when that holds, the successive approximation method is guaranteed to converge to a unique solution within the neighborhood. If this condition is not met, the method may still converge, but there is no guarantee.
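
As a rough, informal check of this condition, one can estimate |h'(x)| with finite differences at sample points across an interval of interest; the interval endpoints, sample count, and step size below are illustrative assumptions rather than values prescribed by the method.

    def max_abs_derivative(h, a, b, num_samples=200, eps=1e-6):
        """Crude estimate of the largest |h'(x)| on [a, b] via central differences."""
        max_slope = 0.0
        for i in range(num_samples + 1):
            x = a + (b - a) * i / num_samples
            slope = (h(x + eps) - h(x - eps)) / (2 * eps)  # central difference
            max_slope = max(max_slope, abs(slope))
        return max_slope  # a value below 1 suggests (but does not prove) a contraction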

To effectively apply the successive approximation method, careful consideration must be given to the choice of the initial guess, x₀. The initial guess should be chosen such that it is sufficiently close to the actual solution. If the initial guess is too far from the solution, the iterations may diverge, or they may converge to a different solution. In some cases, prior knowledge of the function's behavior or the equation's properties can help guide the selection of a suitable initial guess. Graphical methods or other numerical techniques can also be used to obtain an approximate solution, which can then be used as the initial guess for the successive approximation method.
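
One simple way to act on this advice is to plot both sides of the equation and look for crossings or near-crossings. The sketch below uses the specific f(x) and g(x) defined in the next section; the plotting window is an arbitrary choice made to stay away from x = 0 and x = -8, where g(x) and f(x) are undefined.

    import numpy as np
    import matplotlib.pyplot as plt

    # Plot f and g on a window away from the singularities at x = 0 and x = -8.
    x = np.linspace(0.5, 4.0, 400)
    f_vals = (x**3 + 3*x + 2) / (x + 8)
    g_vals = (x - 1) / x

    plt.plot(x, f_vals, label="f(x)")
    plt.plot(x, g_vals, label="g(x)")
    plt.xlabel("x")
    plt.legend()
    plt.show()  # any crossing or near-crossing is a candidate initial guess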

Setting up the Equation for Iteration

Our given equations are:

f(x) = (x³ + 3x + 2) / (x + 8)
g(x) = (x - 1) / x

We want to solve f(x) = g(x). To apply the successive approximation method, we first need to rewrite the equation in the form x = h(x). Let's set f(x) equal to g(x):

(x³ + 3x + 2) / (x + 8) = (x - 1) / x

Now, we need to manipulate this equation to isolate x on one side. This can be done in several ways, leading to different forms of h(x). One natural first step is to multiply both sides by x(x + 8) to clear the denominators:

x(x³ + 3x + 2) = (x - 1)(x + 8)
x⁴ + 3x² + 2x = x² + 7x - 8
x⁴ + 2x² - 5x + 8 = 0
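
If a computer algebra system is available, the cross-multiplication and collection of terms can be double-checked symbolically; the sketch below assumes SymPy is installed.

    import sympy as sp

    x = sp.symbols('x')
    lhs = x * (x**3 + 3*x + 2)   # x(x^3 + 3x + 2)
    rhs = (x - 1) * (x + 8)      # (x - 1)(x + 8)

    # Moving everything to one side should reproduce x^4 + 2x^2 - 5x + 8.
    print(sp.expand(lhs - rhs))  # -> x**4 + 2*x**2 - 5*x + 8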

This form is not directly suitable for successive approximation, as it doesn't isolate x. Let's try another approach: starting again from the expanded equation x⁴ + 3x² + 2x = x² + 7x - 8, we could solve for the highest power of x:

x⁴ = -2x² + 5x - 8

This still doesn't give us x = h(x) in a usable form. Returning to the expanded equation, we can instead rearrange to isolate a single x term:

2x = -x⁴ - 2x² + 7x - 8

This is also not ideal. Instead, consider:

x⁴ + 2x² - 5x + 8 = 0

We can rewrite this as:

5x = x⁴ + 2x² + 8
x = (x⁴ + 2x² + 8) / 5

Now we have the form x = h(x), where h(x) = (x⁴ + 2x² + 8) / 5. This is the form we will use for our iterative process.
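
As a quick sanity check, one can verify symbolically that a fixed point of this h(x) is exactly a root of the quartic derived above; again this assumes SymPy.

    import sympy as sp

    x = sp.symbols('x')
    h = (x**4 + 2*x**2 + 8) / 5

    # h(x) - x equals (x^4 + 2x^2 - 5x + 8) / 5, so x = h(x) holds exactly when
    # the quartic x^4 + 2x^2 - 5x + 8 = 0 is satisfied.
    print(sp.simplify(h - x - (x**4 + 2*x**2 - 5*x + 8) / 5))  # -> 0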

Iterative Process and Initial Guess

We now have our iterative formula:

xₙ₊₁ = h(xₙ) = (xₙ⁴ + 2xₙ² + 8) / 5

Before we begin iterating, we need an initial guess, x₀. Here we will use x₀ = 1. This choice is somewhat arbitrary, but it gives the iteration a place to start. The convergence behavior and the value of the final approximation can both be influenced by the initial guess, so it may be necessary to experiment with different initial guesses to find a satisfactory solution.

Three Iterations of Successive Approximation

Let's perform three iterations of the successive approximation method using the formula xₙ₊₁ = (xₙ⁴ + 2xₙ² + 8) / 5 and the initial guess x₀ = 1.

Iteration 1:

x₁ = h(x₀) = h(1) = (1⁴ + 2(1)² + 8) / 5 = (1 + 2 + 8) / 5 = 11 / 5 = 2.2

Iteration 2:

x₂ = h(x₁) = h(2.2) = ((2.2)⁴ + 2(2.2)² + 8) / 5 = (23.4256 + 9.68 + 8) / 5 = 41.1056 / 5 = 8.22112

Iteration 3:

x₃ = h(x₂) = h(8.22112) = ((8.22112)⁴ + 2(8.22112)² + 8) / 5 ≈ (4567.98 + 135.17 + 8) / 5 = 4711.15 / 5 ≈ 942.23

After three iterations, we have the following approximations:

  • x₁ = 2.2
  • x₂ = 8.22112
  • x₃ ≈ 942.23
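
For reference, the three hand computations can be reproduced with a short script; rounding in the printed values may differ slightly from the figures above.

    def h(x):
        return (x**4 + 2*x**2 + 8) / 5

    x = 1.0                # initial guess x0
    for n in range(1, 4):  # three iterations
        x = h(x)
        print(f"x{n} = {x}")
    # Prints approximately: x1 = 2.2, x2 = 8.22112, x3 = 942.23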

The values are diverging, which suggests that our chosen form of h(x) and/or initial guess may not be ideal for convergence. This highlights a crucial aspect of the successive approximation method: the rearrangement of the equation and the choice of initial guess significantly impact convergence. It is essential to analyze the behavior of the function and experiment with different forms of h(x) and initial guesses to achieve a stable solution.

Discussion and Convergence Analysis

Our initial iterations yielded values that rapidly increased, indicating a divergence from a potential solution. This divergence underscores the importance of carefully selecting the function h(x) in the successive approximation method. The convergence of the method is not guaranteed for all choices of h(x), even if a solution exists. A key factor influencing convergence is the derivative of h(x). If |h'(x)| < 1 in a neighborhood around the solution, the method is likely to converge. However, if |h'(x)| > 1, the method may diverge, as observed in our case.

In our example, h(x) = (x⁴ + 2x² + 8) / 5. The derivative h'(x) is:

h'(x) = (4x³ + 4x) / 5

For x = 2.2, h'(2.2) = (4(2.2)³ + 4(2.2)) / 5 = (42.592 + 8.8) / 5 = 51.392 / 5 ≈ 10.28. Since |h'(2.2)| > 1, the divergence is not unexpected. This highlights that the successive approximation method's performance is highly sensitive to the choice of h(x) and the initial guess.
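
The same check is easy to script; the sketch below simply evaluates h'(x) at the first few iterates.

    def h_prime(x):
        return (4*x**3 + 4*x) / 5

    for x in (1.0, 2.2, 8.22112):
        print(f"h'({x}) = {h_prime(x):.4f}")
    # Roughly 1.6, 10.28, and 451.1 -- every iterate sits where |h'(x)| > 1.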

To improve convergence, we need to find a different rearrangement of the equation f(x) = g(x) such that the derivative of the new h(x) is less than 1 in magnitude near the solution. This might involve more complex algebraic manipulations or the use of other numerical methods to obtain a better initial guess or a different form of the equation.

For instance, we could try to isolate x in a different way or use a numerical root-finding method like the Newton-Raphson method to get a better initial approximation. The Newton-Raphson method is another iterative technique that often converges faster than the successive approximation method, especially when the initial guess is close to the solution. However, it requires calculating the derivative of the function, which might not always be feasible.
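
For completeness, here is a minimal generic Newton-Raphson sketch; it is demonstrated on the unrelated toy equation x² - 2 = 0 purely to illustrate the update rule, not on the quartic from this article.

    def newton_raphson(func, dfunc, x0, num_iterations=20, tol=1e-12):
        """Iterate x_{n+1} = x_n - func(x_n) / dfunc(x_n)."""
        x = x0
        for _ in range(num_iterations):
            step = func(x) / dfunc(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Toy usage: solve x^2 - 2 = 0 from x0 = 1; converges to sqrt(2) ~ 1.41421356.
    print(newton_raphson(lambda x: x**2 - 2, lambda x: 2*x, 1.0))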

Conclusion

In this article, we explored the successive approximation method for solving the equation f(x) = g(x), where f(x) = (x³ + 3x + 2) / (x + 8) and g(x) = (x - 1) / x. We rearranged the equation into the form x = h(x) and performed three iterations using an initial guess of x₀ = 1. The results showed divergence, highlighting the importance of carefully choosing the function h(x) and the initial guess to ensure convergence.

The successive approximation method is a powerful tool for approximating solutions to equations, but its success depends on the properties of the function and the initial guess. When the method diverges, alternative rearrangements of the equation or other numerical techniques may be necessary to find a solution. Understanding the theoretical underpinnings of the method, such as the contraction mapping theorem and the role of the derivative of h(x), is crucial for its effective application.

Further exploration could involve trying different rearrangements of the equation, using different initial guesses, or employing other numerical methods to find a more accurate solution. The successive approximation method serves as a valuable introduction to iterative techniques in numerical analysis, paving the way for more advanced methods and problem-solving strategies.