Wednesday, 8 January 2025

BCA Numerical Method 4th Semester

Numerical Method Concepts Quiz



 Topic: Solution of Nonlinear Equations

Solution of Nonlinear Equations (10 Hours)

Nonlinear equations are equations that involve nonlinear terms, such as variables raised to a power other than one, trigonometric functions, exponential functions, and more. Solving these equations is fundamental in numerical methods, as they arise in many engineering, physics, and mathematical applications.


1. Introduction to Nonlinear Equations

  • A nonlinear equation is an equation of the form f(x) = 0, where f(x) is a nonlinear function.
  • Examples include:
    • x^2 - 4 = 0
    • \sin(x) - x^2 = 0
  • These equations generally cannot be solved analytically and require numerical methods.

2. Types of Equations

  • Algebraic Equations: Polynomial equations, e.g., x^3 - 6x + 2 = 0.
  • Transcendental Equations: Equations involving trigonometric, exponential, or logarithmic functions, e.g., e^x - 3\sin(x) = 0.

3. Errors in Computing

Errors arise due to approximations in numerical methods. Common types include:

  • Absolute Error: E_a = |x_{\text{true}} - x_{\text{approx}}|
  • Relative Error: E_r = \frac{|x_{\text{true}} - x_{\text{approx}}|}{|x_{\text{true}}|}
  • Truncation Error: Errors due to approximating a mathematical process.

4. Numerical Methods for Solving Nonlinear Equations


4.1 The Bisection Method

  • Principle: The method repeatedly halves an interval [a, b] and checks in which half the root lies. It requires f(a)f(b) < 0.
  • Formula: c = \frac{a+b}{2}; the sign of f(c) determines the new subinterval.
  • Advantages: Simple and reliable.
  • Disadvantages: Slow convergence.

Example: Solve f(x) = x^3 - 4x - 9 = 0 using the Bisection Method on [2, 3].

  1. f(2) = -9, f(3) = 6, so f(2)f(3) < 0.
  2. Midpoint: c = \frac{2+3}{2} = 2.5, f(2.5) = -3.375.
  3. Since f(2.5) < 0, update the interval to [2.5, 3].
  4. Repeat until desired accuracy.
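A minimal C sketch of this procedure is given below; the function x^3 - 4x - 9 and the interval [2, 3] come from the example above, while the tolerance and stopping rule are assumptions.

#include <stdio.h>
#include <math.h>

double f(double x) {
    return x * x * x - 4 * x - 9;   /* example function from above */
}

int main(void) {
    double a = 2.0, b = 3.0, c, tol = 1e-4;

    if (f(a) * f(b) >= 0) {         /* the method needs a sign change on [a, b] */
        printf("f(a) and f(b) must have opposite signs\n");
        return 1;
    }
    while ((b - a) / 2.0 > tol) {
        c = (a + b) / 2.0;          /* midpoint of the current interval */
        if (f(c) == 0.0)
            break;
        if (f(a) * f(c) < 0)        /* root lies in [a, c] */
            b = c;
        else                        /* root lies in [c, b] */
            a = c;
    }
    printf("Approximate root: %lf\n", (a + b) / 2.0);
    return 0;
}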

Practice Question: Solve x^2 - 5 = 0 on [2, 3] using the Bisection Method.


4.2 The Method of False Position (Regula Falsi)

  • Principle: Similar to the Bisection Method but uses a linear approximation between points.
  • Formula: c = a - \frac{f(a)(b-a)}{f(b)-f(a)}
  • Advantages: Faster than Bisection.
  • Disadvantages: Can stagnate.

Example: Solve f(x) = x^3 - x - 1 = 0 on [1, 2].

  1. f(1) = -1, f(2) = 5, so f(1)f(2) < 0.
  2. c = 1 - \frac{(-1)(2-1)}{5-(-1)} = 1.1667.
  3. Update interval and repeat.

Practice Question: Solve x^2 - 6 = 0 on [2, 3] using the Method of False Position.


4.3 Newton-Raphson Method

  • Principle: Uses the tangent line at a point to approximate the root.
  • Formula: x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}
  • Advantages: Very fast convergence near the root.
  • Disadvantages: Requires derivative and may diverge.

Example: Solve f(x) = x^2 - 2 = 0 with x_0 = 1.5.

  1. f(x) = x^2 - 2, f'(x) = 2x.
  2. x_1 = 1.5 - \frac{1.5^2 - 2}{2(1.5)} = 1.4167.
  3. Repeat until desired accuracy.

Practice Question: Solve \ln(x) + x - 2 = 0 using Newton-Raphson with x_0 = 1.


4.4 Fixed-Point Iteration

  • Principle: Rewrites the equation as x = g(x) and iterates.
  • Formula: x_{n+1} = g(x_n)
  • Convergence Criterion: |g'(x)| < 1 near the root.

Example: Solve x^3 + x - 1 = 0 by rewriting it as x = \sqrt[3]{1 - x}.

  1. g(x) = \sqrt[3]{1-x}, x_0 = 0.5.
  2. x_1 = \sqrt[3]{1 - 0.5} = 0.7937.
  3. Repeat.
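A short C sketch of Fixed-Point Iteration for this example is given below; g(x) = (1 - x)^{1/3} and the starting guess x_0 = 0.5 come from the example, while the tolerance and iteration cap are assumptions.

#include <stdio.h>
#include <math.h>

double g(double x) {
    return cbrt(1.0 - x);           /* g(x) = (1 - x)^(1/3) */
}

int main(void) {
    double x = 0.5, x_new, tol = 1e-5;
    int iter = 0, max_iter = 100;

    do {
        x_new = g(x);               /* x_{n+1} = g(x_n) */
        if (fabs(x_new - x) < tol)
            break;
        x = x_new;
    } while (++iter < max_iter);

    printf("Approximate root: %lf after %d iterations\n", x_new, iter);
    return 0;
}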

Practice Question: Solve x^2 - 3 = 0 by Fixed-Point Iteration.


4.5 Solution of a System of Nonlinear Equations

  • Uses extensions of methods like Newton-Raphson.
  • Example: Solve f(x, y) = x^2 + y^2 - 4 = 0, \quad g(x, y) = x^2 - y - 1 = 0 using an iterative approach.

Summary

  • Bisection Method: Reliable, slower.
  • Method of False Position: Faster, may stagnate.
  • Newton-Raphson: Fast, requires derivative.
  • Fixed-Point Iteration: Simple, depends on |g'(x)| < 1 near the root.
  • System of Equations: Extension of Newton-Raphson.

Practice Problems

  1. Solve x^3 - 5x + 3 = 0 using Newton-Raphson with x_0 = 1.
  2. Solve \sin(x) - x/2 = 0 on [1, 2] using the Method of False Position.
  3. Solve the system x^2 + y^2 = 5, \quad x - y = 1 using a numerical method.


 Topic: Interpolation and Approximation

Interpolation and Approximation (10 Hours)

Interpolation and approximation are methods used to estimate values of a function based on known data points. These techniques are crucial in numerical analysis for solving engineering and scientific problems where analytical solutions are impractical.


1. Introduction

  • Interpolation: Estimation of function values at intermediate points using known data points.
  • Approximation: Finding a function that closely represents a given set of data points, even if the function doesn't pass through them exactly.

Applications

  1. Predicting missing data points.
  2. Smoothing noisy data.
  3. Constructing simpler models for complex functions.

2. Errors in Polynomial Interpolation

The error in interpolation is given by:

E(x) = f(x) - P_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i)

where P_n(x) is the interpolating polynomial, \xi is a point in [x_0, x_n], and n is the degree of the polynomial.

Key Points

  • Higher-degree polynomials may lead to oscillations (Runge's phenomenon).
  • The accuracy depends on the spacing of data points and the behavior of f(x).

Example: Given f(x) = e^x at x = 0, 1, 2, estimate the interpolation error at x = 1.5 using a second-degree polynomial.

Solution:

  1. The relevant derivative is f^{(3)}(x) = e^x.
  2. Bound it on [0, 2]: |f^{(3)}(\xi)| \leq e^2.
  3. Bound the error: |E(1.5)| \leq \frac{e^2}{3!} |(1.5 - 0)(1.5 - 1)(1.5 - 2)| = \frac{e^2}{6}(0.375) \approx 0.46.

Practice Question: Compute the interpolation error for f(x) = \ln(x) at x = 2.5 using points x = 1, 2, 3.


3. Lagrange's Polynomial

Lagrange's interpolation formula is:

P(x) = \sum_{i=0}^{n} f(x_i) \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}

Example:

Given f(1) = 2, f(2) = 3, f(4) = 5, find f(3).

Solution:

  1. Construct the Lagrange polynomial: P(x) = f(1) \frac{(x-2)(x-4)}{(1-2)(1-4)} + f(2) \frac{(x-1)(x-4)}{(2-1)(2-4)} + f(4) \frac{(x-1)(x-2)}{(4-1)(4-2)}
  2. Substitute f(1) = 2, f(2) = 3, f(4) = 5, and simplify.
  3. Evaluate P(3) = 4.

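A compact C sketch of Lagrange interpolation is shown below; the data points (1, 2), (2, 3), (4, 5) and the evaluation point x = 3 are taken from the example above, so it prints P(3.0) = 4.000000.

#include <stdio.h>

int main(void) {
    double x[] = {1.0, 2.0, 4.0};
    double y[] = {2.0, 3.0, 5.0};
    int n = 3;
    double value = 3.0, result = 0.0;

    for (int i = 0; i < n; i++) {
        double term = y[i];
        for (int j = 0; j < n; j++) {
            if (j != i)
                term *= (value - x[j]) / (x[i] - x[j]);  /* Lagrange basis L_i(value) */
        }
        result += term;                                  /* sum of f(x_i) * L_i(value) */
    }
    printf("P(%.1lf) = %lf\n", value, result);
    return 0;
}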
Practice Question: Use Lagrange's interpolation to find f(2.5) for f(1) = 1, f(2) = 4, f(3) = 9.


4. Newton’s Interpolation

4.1 Using Divided Differences

Newton's formula is:

P(x) = f(x_0) + (x-x_0)f[x_0, x_1] + (x-x_0)(x-x_1)f[x_0, x_1, x_2] + \ldots

where divided differences are:

f[x_i, x_{i+1}] = \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i}, \quad f[x_i, x_{i+1}, x_{i+2}] = \frac{f[x_{i+1}, x_{i+2}] - f[x_i, x_{i+1}]}{x_{i+2} - x_i}

Example: Given f(1) = 2, f(2) = 3, f(4) = 5, find f(3) using Newton’s divided differences.

Solution:

  1. Compute divided differences: f[x_0, x_1] = \frac{f(2) - f(1)}{2 - 1} = 1, \quad f[x_1, x_2] = \frac{f(4) - f(2)}{4 - 2} = 1, \quad f[x_0, x_1, x_2] = \frac{f[x_1, x_2] - f[x_0, x_1]}{4 - 1} = 0
  2. Polynomial: P(x) = 2 + (x-1)(1) + (x-1)(x-2)(0) = 2 + (x-1).
  3. Evaluate P(3) = 2 + 2 = 4.

Practice Question: Use Newton’s divided differences to interpolate f(2.5) for f(1) = 2, f(2) = 4, f(4) = 8.


4.2 Using Forward Differences

Newton's forward interpolation formula:

P(x) = y_0 + u\Delta y_0 + \frac{u(u-1)}{2!} \Delta^2 y_0 + \ldots

where u = \frac{x-x_0}{h} and h is the step size.

Practice Question: Use Newton’s forward differences to estimate f(2.2) given x = 2, 3, 4 and f(x) = 4, 8, 16.


5. Cubic Spline Interpolation

Cubic splines construct piecewise cubic polynomials that are smooth and continuous up to the second derivative.

Example:

Given x = [1, 2, 3], f(x) = [2, 3, 5], construct the cubic spline.

Solution:

  1. Solve for the coefficients a_i, b_i, c_i, d_i of each cubic polynomial: S_i(x) = a_i + b_i(x-x_i) + c_i(x-x_i)^2 + d_i(x-x_i)^3
  2. Use continuity and smoothness conditions.

Practice Question: Construct a cubic spline for x = [1, 2, 3] and f(x) = [1, 4, 9].


6. Least Squares Method

6.1 Linear Data

Fits a straight line y = mx + c to minimize:

\sum (y_i - mx_i - c)^2

Example: Fit y = mx + c to (1, 2), (2, 3), (3, 5).

Solution:

  1. Compute the normal equations: \sum y = m\sum x + Nc, \quad \sum xy = m\sum x^2 + c\sum x
  2. Solve for m and c.

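The normal equations above can be solved directly; the C sketch below fits y = mx + c to the example data (1, 2), (2, 3), (3, 5) and prints m = 1.5, c ≈ 0.333 (the data is taken from the example, everything else is a minimal illustration).

#include <stdio.h>

int main(void) {
    double x[] = {1.0, 2.0, 3.0};
    double y[] = {2.0, 3.0, 5.0};
    int N = 3;
    double sx = 0, sy = 0, sxx = 0, sxy = 0;

    for (int i = 0; i < N; i++) {   /* accumulate the sums used in the normal equations */
        sx  += x[i];
        sy  += y[i];
        sxx += x[i] * x[i];
        sxy += x[i] * y[i];
    }
    /* Normal equations: sy = m*sx + N*c,  sxy = m*sxx + c*sx */
    double m = (N * sxy - sx * sy) / (N * sxx - sx * sx);
    double c = (sy - m * sx) / N;
    printf("y = %lf x + %lf\n", m, c);
    return 0;
}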
Practice Question: Fit a line to (1, 1), (2, 4), (3, 9).


6.2 Nonlinear Data

Transforms nonlinear models (e.g., exponential y = ae^{bx}) to linear form:

\ln(y) = \ln(a) + bx

Practice Question: Fit y = ae^{bx} to (1, 2.7), (2, 7.4), (3, 20.1).


 Topic: Numerical Differentiation and Integration

Numerical Differentiation and Integration (5 Hours)

Numerical differentiation and integration involve approximating derivatives and integrals when analytical methods are impractical or impossible. These techniques are widely used in engineering, physics, and other scientific disciplines.


1. Introduction to Numerical Differentiation

Numerical differentiation estimates derivatives using values of a function at discrete points. It is based on finite differences:

  • Forward Difference: f'(x) \approx \frac{f(x+h) - f(x)}{h}
  • Backward Difference: f'(x) \approx \frac{f(x) - f(x-h)}{h}
  • Central Difference: f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}

Example:

Given f(x) = x^2 at x = 2, with h = 0.1, find f'(x) using forward, backward, and central differences.

Solution:

  1. f(2) = 4, f(2.1) = 4.41, f(1.9) = 3.61.
  2. Forward Difference: f'(2) \approx \frac{f(2.1) - f(2)}{0.1} = \frac{4.41 - 4}{0.1} = 4.1
  3. Backward Difference: f'(2) \approx \frac{f(2) - f(1.9)}{0.1} = \frac{4 - 3.61}{0.1} = 3.9
  4. Central Difference: f'(2) \approx \frac{f(2.1) - f(1.9)}{2 \times 0.1} = \frac{4.41 - 3.61}{0.2} = 4.0

Practice Question: Use forward, backward, and central differences to find f'(x) for f(x) = \sin(x) at x = \pi/4, h = 0.1.


2. Newton's Differentiation Formulas

Newton’s forward and backward difference formulas extend numerical differentiation to tabular data.

Newton’s Forward Difference Formula:

f'(x) \approx \frac{\Delta f(x)}{h}

Newton’s Backward Difference Formula:

f'(x) \approx \frac{\nabla f(x)}{h}

Example:

Given the table:

x      1   2   3
f(x)   1   4   9

Find f'(x) at x = 2 using Newton’s forward formula.

Solution:

  1. Compute forward differences: \Delta f(x) = f(x+h) - f(x), \quad \Delta f(1) = 4 - 1 = 3, \quad \Delta f(2) = 9 - 4 = 5
  2. f'(2) \approx \frac{\Delta f(2)}{1} = 5.

Practice Question: Use Newton’s forward difference formula to find f'(x) for the table:

x      0.5    1.0    1.5
f(x)   0.25   1.00   2.25

3. Numerical Integration

Numerical integration approximates the value of definite integrals when the function cannot be integrated analytically.


3.1 Trapezoidal Rule

The integral is approximated by dividing the area under the curve into trapezoids:

\int_a^b f(x) \, dx \approx \frac{h}{2} \left[ f(x_0) + 2 \sum_{i=1}^{n-1} f(x_i) + f(x_n) \right]

where h = \frac{b-a}{n}.

Example: Approximate \int_0^1 x^2 \, dx using the trapezoidal rule with n = 2.

Solution:

  1. h = \frac{1-0}{2} = 0.5, x_0 = 0, x_1 = 0.5, x_2 = 1.
  2. f(0) = 0, f(0.5) = 0.25, f(1) = 1.
  3. Apply the formula: \int_0^1 x^2 \, dx \approx \frac{0.5}{2} [0 + 2(0.25) + 1] = 0.375

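A minimal C sketch of the composite trapezoidal rule is given below, using the integrand x^2, the limits [0, 1], and n = 2 from the example above; it prints 0.375000.

#include <stdio.h>

double f(double x) {
    return x * x;                   /* integrand from the example */
}

int main(void) {
    double a = 0.0, b = 1.0;
    int n = 2;                      /* number of subintervals */
    double h = (b - a) / n;
    double sum = f(a) + f(b);       /* end points get weight 1 */

    for (int i = 1; i < n; i++)     /* interior points get weight 2 */
        sum += 2.0 * f(a + i * h);

    printf("Integral = %lf\n", sum * h / 2.0);
    return 0;
}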
Practice Question: Approximate \int_0^2 e^x \, dx using the trapezoidal rule with n = 4.


3.2 Simpson’s 1/3 Rule

The integral is approximated using parabolic segments:

\int_a^b f(x) \, dx \approx \frac{h}{3} \left[ f(x_0) + 4 \sum_{i \text{ odd}} f(x_i) + 2 \sum_{i \text{ even}} f(x_i) + f(x_n) \right]

Example: Approximate \int_0^1 x^2 \, dx using Simpson’s 1/3 rule with n = 2.

Solution:

  1. h = \frac{1-0}{2} = 0.5, x_0 = 0, x_1 = 0.5, x_2 = 1.
  2. f(0) = 0, f(0.5) = 0.25, f(1) = 1.
  3. Apply the formula: \int_0^1 x^2 \, dx \approx \frac{0.5}{3} [0 + 4(0.25) + 1] = 0.333

Practice Question: Approximate \int_0^\pi \sin(x) \, dx using Simpson’s 1/3 rule with n = 4.


3.3 Simpson’s 3/8 Rule

The rule is:

\int_a^b f(x) \, dx \approx \frac{3h}{8} \left[ f(x_0) + 3f(x_1) + 3f(x_2) + f(x_3) \right] \quad \text{(for } n = 3\text{)}

In the composite form, interior points whose index is a multiple of 3 receive weight 2 and all other interior points receive weight 3.

Practice Question: Use Simpson’s 3/8 rule to approximate \int_0^2 x^3 \, dx with n = 3.


3.4 Romberg Integration

Romberg integration refines the trapezoidal rule using Richardson extrapolation.

Example: Approximate \int_0^1 x^2 \, dx using Romberg integration.

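A compact C sketch of Romberg integration is given below: it builds trapezoidal estimates with 1, 2, 4, ... subintervals and refines them by Richardson extrapolation. The integrand x^2 on [0, 1] and the table depth are assumptions chosen to match the example.

#include <stdio.h>
#include <math.h>

double f(double x) {
    return x * x;                                 /* integrand from the example */
}

int main(void) {
    double a = 0.0, b = 1.0;
    int maxk = 4;
    double R[5][5];

    R[0][0] = (b - a) / 2.0 * (f(a) + f(b));      /* one-interval trapezoid */
    for (int k = 1; k <= maxk; k++) {
        int n = 1 << k;                           /* 2^k subintervals */
        double h = (b - a) / n, sum = 0.0;
        for (int i = 1; i < n; i += 2)            /* only the newly added midpoints */
            sum += f(a + i * h);
        R[k][0] = 0.5 * R[k - 1][0] + h * sum;    /* refined trapezoidal estimate */
        for (int j = 1; j <= k; j++)              /* Richardson extrapolation */
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (pow(4, j) - 1);
    }
    printf("Romberg estimate: %lf\n", R[maxk][maxk]);
    return 0;
}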
Practice Question: Use Romberg integration to approximate \int_1^2 \ln(x) \, dx.


4. Numerical Double Integration

For double integrals:

\int_a^b \int_c^d f(x, y) \, dx \, dy

numerical integration can be applied iteratively.

Example:

Approximate:

\int_0^1 \int_0^1 (x + y) \, dx \, dy

using the trapezoidal rule.

Solution:

  1. Outer integral: \int_0^1 g(y) \, dy, where g(y) = \int_0^1 (x+y) \, dx.
  2. Compute g(y) using the trapezoidal rule: g(y) \approx \frac{1}{2} [f(0, y) + f(1, y)], and then integrate g(y) with respect to y.

Practice Question: Approximate:

\int_0^2 \int_1^3 (x^2 + y^2) \, dx \, dy

using the trapezoidal rule.


 Topic: Solution of Linear Algebraic Equations

Solution of Linear Algebraic Equations (10 Hours)

Linear algebraic equations are equations involving linear combinations of variables. Numerical methods are often employed to solve systems of linear equations when analytical methods become infeasible for large systems. This module covers various methods for solving linear systems.


1. Review of the Existence of Solutions and Properties of Matrices

Existence of Solutions

  • A system of linear equations AX = B has:
    1. Unique solution: When \det(A) \neq 0, i.e., A is invertible.
    2. Infinite solutions: When A is singular and the system is consistent.
    3. No solution: When the system is inconsistent.

Properties of Matrices

  • Square matrix: A has the same number of rows and columns.
  • Symmetric matrix: A = A^T.
  • Diagonal matrix: Only diagonal elements are nonzero.
  • Singular matrix: \det(A) = 0.

Consistency of a Linear System

A system is consistent if there exists at least one solution. To check:

  1. Compute the rank of A (coefficient matrix) and of the augmented matrix [A | B].
  2. If \text{rank}(A) = \text{rank}([A|B]), the system is consistent.

Example: Solve the system using rank:

x + y + z = 6, \quad 2x + 3y + z = 14, \quad x + 2y - z = 1

Solution:

  1. Coefficient matrix: A = \begin{bmatrix} 1 & 1 & 1 \\ 2 & 3 & 1 \\ 1 & 2 & -1 \end{bmatrix}, \quad B = \begin{bmatrix} 6 \\ 14 \\ 1 \end{bmatrix}.
  2. Augmented matrix: [A|B] = \begin{bmatrix} 1 & 1 & 1 & 6 \\ 2 & 3 & 1 & 14 \\ 1 & 2 & -1 & 1 \end{bmatrix}.
  3. Rank: \text{rank}(A) = \text{rank}([A|B]) = 3, so the system is consistent (with a unique solution).

Practice Question: Check the consistency of the system:

x + 2y + z = 7, \quad 3x + y - z = 5, \quad 2x + y + z = 8.

2. Gaussian Elimination Method

Gaussian elimination transforms AX = B into an upper triangular form for back substitution.

Steps:

  1. Augment A with B.
  2. Use row operations to create zeros below the diagonal.
  3. Solve the triangular system using back substitution.

Example: Solve:

x + y + z = 6, \quad 2x + 3y + z = 14, \quad x + 2y - z = 1.

Solution:

  1. Augmented matrix: [A|B] = \begin{bmatrix} 1 & 1 & 1 & 6 \\ 2 & 3 & 1 & 14 \\ 1 & 2 & -1 & 1 \end{bmatrix}.
  2. Eliminate x from rows 2 and 3: \begin{bmatrix} 1 & 1 & 1 & 6 \\ 0 & 1 & -1 & 2 \\ 0 & 1 & -2 & -5 \end{bmatrix}.
  3. Eliminate y from row 3: \begin{bmatrix} 1 & 1 & 1 & 6 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & -1 & -7 \end{bmatrix}.
  4. Back substitution: z = 7, \quad y = 9, \quad x = -10.

Practice Question: Solve using Gaussian elimination:

2x + 3y + z = 9, \quad 4x + y + 2z = 10, \quad 3x + 2y + 3z = 14.

3. Gauss-Jordan Method

Gauss-Jordan elimination reduces the matrix to reduced row echelon form (RREF).

Steps:

  1. Augment A with B.
  2. Use row operations to create zeros above and below each pivot.

Example: Solve:

x + y + z = 6, \quad 2x + 3y + z = 14, \quad x + 2y - z = 1.

Solution:

  1. Augment A with B.
  2. Transform to RREF: \begin{bmatrix} 1 & 0 & 0 & -10 \\ 0 & 1 & 0 & 9 \\ 0 & 0 & 1 & 7 \end{bmatrix}.
  3. Solution: x = -10, y = 9, z = 7.

Practice Question: Solve using Gauss-Jordan:

x + 2y + 3z = 14, \quad 2x - y + z = 3, \quad 3x + y - 2z = 4.

4. Inverse of a Matrix Using Gaussian Elimination

To find A^{-1}, augment A with the identity matrix and use Gaussian elimination to reduce A to the identity matrix.

Example: Find the inverse of:

A = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}.

Solution:

  1. Augment with the identity: [A|I] = \begin{bmatrix} 2 & 1 & 1 & 0 \\ 1 & 3 & 0 & 1 \end{bmatrix}.
  2. Reduce the left block to the identity: A^{-1} = \begin{bmatrix} 0.6 & -0.2 \\ -0.2 & 0.4 \end{bmatrix}.

Practice Question: Find the inverse of:

A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}.

5. Method of Factorization

LU Decomposition

Decompose A into L (lower triangular) and U (upper triangular) matrices.

Example:

Factorize:

A = \begin{bmatrix} 2 & 3 \\ 4 & 7 \end{bmatrix}.

Solution:

  1. L = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}, \quad U = \begin{bmatrix} 2 & 3 \\ 0 & 1 \end{bmatrix}.

Practice Question: Factorize:

A = \begin{bmatrix} 1 & 1 \\ 2 & 3 \end{bmatrix}.

6. Iterative Methods

Jacobi Iteration

x^{(k+1)} = D^{-1}\left(B - (L+U)x^{(k)}\right)

Gauss-Seidel Iteration

x^{(k+1)} = (D+L)^{-1}\left(B - Ux^{(k)}\right)

Example: Solve:

4x - y = 7, \quad x + 3y = 11

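A short C sketch of Gauss-Seidel iteration for this system is shown below; the zero starting guesses and the tolerance are assumptions. Using the previous iterate's values on the right-hand side of both updates (instead of the freshly computed x) would give Jacobi iteration instead.

#include <stdio.h>
#include <math.h>

int main(void) {
    /* System from the example: 4x - y = 7,  x + 3y = 11 (diagonally dominant) */
    double x = 0.0, y = 0.0;        /* assumed initial guesses */
    double x_old, y_old, tol = 1e-5;
    int iter = 0;

    do {
        x_old = x;
        y_old = y;
        x = (7.0 + y) / 4.0;        /* solve the first equation for x, using the latest y */
        y = (11.0 - x) / 3.0;       /* solve the second equation for y, using the new x */
        iter++;
    } while (fabs(x - x_old) > tol || fabs(y - y_old) > tol);

    printf("x = %lf, y = %lf after %d iterations\n", x, y, iter);
    return 0;
}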
Practice Question: Solve:

3x + y = 14, \quad x + 2y = 9

7. Power Method

Find the largest eigenvalue and its eigenvector.

Example: Find the largest eigenvalue of:

A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}.

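A minimal C sketch of the Power Method for this matrix is given below; the starting vector (1, 0) and the fixed iteration count are assumptions. For this matrix it converges to the dominant eigenvalue 3 with eigenvector (1, 1).

#include <stdio.h>
#include <math.h>

int main(void) {
    double A[2][2] = {{2.0, 1.0}, {1.0, 2.0}};   /* example matrix */
    double v[2] = {1.0, 0.0};                    /* assumed starting vector */
    double lambda = 0.0;

    for (int k = 0; k < 25; k++) {
        double w[2];
        w[0] = A[0][0] * v[0] + A[0][1] * v[1];  /* w = A v */
        w[1] = A[1][0] * v[0] + A[1][1] * v[1];
        lambda = fabs(w[0]) > fabs(w[1]) ? w[0] : w[1];  /* dominant component */
        v[0] = w[0] / lambda;                    /* normalize so iterates stay bounded */
        v[1] = w[1] / lambda;
    }
    printf("Largest eigenvalue ~ %lf, eigenvector ~ (%lf, %lf)\n", lambda, v[0], v[1]);
    return 0;
}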
Practice Question: Find the largest eigenvalue of:

A = \begin{bmatrix} 4 & 1 \\ 2 & 3 \end{bmatrix}.

 Topic: Solution of Ordinary Differential Equations

Solution of Ordinary Differential Equations (ODEs) - 7 Hours

Ordinary Differential Equations (ODEs) are equations involving derivatives of a function with respect to a single variable. Numerical methods for ODEs are used when exact solutions are difficult or impossible to obtain.


1. Introduction to Differential Equations

  • General Form of an ODE: \frac{dy}{dx} = f(x, y), where f(x, y) is a given function.
  • Order of an ODE: The order is determined by the highest derivative present.
  • Initial Value Problem (IVP): Solve \frac{dy}{dx} = f(x, y) given y(x_0) = y_0.

2. Taylor Series Method

The Taylor Series method expands y(x) around x_0:

y(x+h) = y(x) + h \frac{dy}{dx} + \frac{h^2}{2!} \frac{d^2y}{dx^2} + \cdots

Example:

Solve \frac{dy}{dx} = x + y, y(0) = 1; find y(0.1) using the Taylor series up to the second-order term.

Solution:

  1. \frac{dy}{dx} = x + y, \quad \frac{d^2y}{dx^2} = 1 + \frac{dy}{dx} = 1 + x + y.
  2. Using y(0) = 1: y(0.1) \approx 1 + 0.1(0 + 1) + \frac{(0.1)^2}{2}(1 + 0 + 1) = 1 + 0.1 + 0.01 = 1.11.

Practice Question: Solve \frac{dy}{dx} = x^2 - y, y(0) = 2; find y(0.1) using the Taylor series up to the third term.


3. Picard's Method

Picard's iteration provides successive approximations to the solution:

y_{n+1}(x) = y_0 + \int_{x_0}^x f(t, y_n(t)) \, dt

Example:

Solve \frac{dy}{dx} = x + y, \, y(0) = 1, for x \in [0, 0.1].

Solution:

  1. First approximation: y_1(x) = 1 + \int_0^x (t + 1) \, dt = 1 + \left[t + \frac{t^2}{2}\right]_0^x = 1 + x + \frac{x^2}{2}.
  2. Second approximation: Substitute y_1(x) into the integral.

Practice Question: Solve \frac{dy}{dx} = y - x, \, y(0) = 1, for x \in [0, 0.1].


4. Euler's Method

Euler's method approximates y using:

y_{n+1} = y_n + h f(x_n, y_n)

where h is the step size.

Example:

Solve \frac{dy}{dx} = x + y, \, y(0) = 1; find y(0.1) using h = 0.1.

Solution:

  1. x_0 = 0, y_0 = 1, f(x, y) = x + y.
  2. y_1 = y_0 + h f(x_0, y_0) = 1 + 0.1(0 + 1) = 1.1.

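A minimal C sketch of Euler's method for this example is given below; the ODE dy/dx = x + y, y(0) = 1, and h = 0.1 come from the example, while the end point x = 1 is an assumption.

#include <stdio.h>

double f(double x, double y) {
    return x + y;                   /* dy/dx = x + y from the example */
}

int main(void) {
    double x = 0.0, y = 1.0;        /* initial condition y(0) = 1 */
    double h = 0.1, xn = 1.0;       /* step size and assumed end point */

    while (x < xn - 1e-9) {
        y += h * f(x, y);           /* y_{n+1} = y_n + h f(x_n, y_n) */
        x += h;
        printf("x = %.2lf, y = %lf\n", x, y);
    }
    return 0;
}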
Practice Question: Solve \frac{dy}{dx} = x - y, \, y(0) = 2, for y(0.2) using h = 0.1.


5. Heun's Method

Heun's method improves Euler's method by averaging the slopes:

y_{n+1} = y_n + \frac{h}{2} \left[ f(x_n, y_n) + f(x_{n+1}, y^*_n) \right]

where y^*_n = y_n + h f(x_n, y_n).

Example:

Solve \frac{dy}{dx} = x + y, \, y(0) = 1; find y(0.1).

Solution:

  1. Predictor (Euler’s step): y^*_1 = y_0 + h f(x_0, y_0) = 1 + 0.1(0 + 1) = 1.1.
  2. Corrector: y_1 = y_0 + \frac{0.1}{2} \left[ f(0, 1) + f(0.1, 1.1) \right] = 1 + 0.05(1 + 1.2) = 1.11.

Practice Question: Solve \frac{dy}{dx} = x^2 + y, \, y(0) = 1, for y(0.2) using Heun's method.


6. Runge-Kutta Methods

Second-Order Runge-Kutta (RK2):

y_{n+1} = y_n + h k_2, \quad k_1 = f(x_n, y_n), \quad k_2 = f\left(x_n + \frac{h}{2}, y_n + \frac{h}{2} k_1\right)

Fourth-Order Runge-Kutta (RK4):

y_{n+1} = y_n + \frac{h}{6} (k_1 + 2k_2 + 2k_3 + k_4)

where:

k_1 = f(x_n, y_n), \quad k_2 = f\left(x_n + \frac{h}{2}, y_n + \frac{h}{2} k_1\right), \quad k_3 = f\left(x_n + \frac{h}{2}, y_n + \frac{h}{2} k_2\right), \quad k_4 = f(x_n + h, y_n + h k_3).

Example: Solve \frac{dy}{dx} = x + y, \, y(0) = 1; find y(0.1) using RK4 with h = 0.1.

Solution:

  1. k_1 = f(0, 1) = 1, \quad k_2 = f(0.05, 1.05) = 1.1, \quad k_3 = f(0.05, 1.055) = 1.105, \quad k_4 = f(0.1, 1.1105) = 1.2105.
  2. y(0.1) = 1 + \frac{0.1}{6} (1 + 2(1.1) + 2(1.105) + 1.2105) = 1.1103.

Practice Question: Solve \frac{dy}{dx} = x^2 + y, \, y(0) = 1, for y(0.1) using RK4.


7. Solutions of Higher Order Equations

Higher-order ODEs can be reduced to a system of first-order equations:

y'' = f(x, y, y') \quad \implies \quad z_1 = y, \, z_2 = y', \, z_2' = f(x, z_1, z_2).

Example: Solve y'' + y = 0, y(0) = 1, y'(0) = 0, for y(0.1).

Solution:

  1. Convert to first-order: z_1' = z_2, \, z_2' = -z_1.
  2. Use RK4 or other methods to solve.

Practice Question: Solve y'' - 2y' + y = 0, y(0) = 1, y'(0) = 2, for y(0.1).


8. Boundary Value Problems (BVPs)

In BVPs, conditions are specified at multiple points. Common methods include:

  • Shooting Method: Convert the BVP to an IVP and iterate to match boundary conditions.

Algorithm for Shooting Method:

  1. Guess an initial slope y'(x_0).
  2. Solve the IVP using numerical methods.
  3. Adjust the guess until the boundary condition is satisfied.

Example: Solve y'' + y = 0, y(0) = 0, y(\pi/2) = 1.

Practice Question: Solve y'' - y = 0, y(0) = 1, y(1) = 2, using the shooting method.


 Topic: Solution of Partial Differential Equations

Solution of Partial Differential Equations (PDEs) - 5 Hours

Partial Differential Equations (PDEs) involve multiple independent variables and their partial derivatives. These equations are fundamental in modeling physical phenomena such as heat conduction, fluid dynamics, and wave propagation.


1. Introduction to Partial Differential Equations

General Form of PDEs

A PDE is expressed as:

F\left(x_1, x_2, \dots, u, \frac{\partial u}{\partial x_1}, \frac{\partial u}{\partial x_2}, \dots, \frac{\partial^2 u}{\partial x_1^2}, \dots \right) = 0

where u = u(x_1, x_2, \dots) is the dependent variable.

Types of PDEs

  1. Elliptic PDEs: \nabla^2 u = 0 (e.g., Laplace's equation).
  2. Parabolic PDEs: \frac{\partial u}{\partial t} - \alpha \nabla^2 u = 0 (e.g., the heat equation).
  3. Hyperbolic PDEs: \frac{\partial^2 u}{\partial t^2} - c^2 \nabla^2 u = 0 (e.g., the wave equation).

2. Deriving Difference Equations

Finite Difference Approximations

To solve PDEs numerically, the domain is discretized, and derivatives are approximated using finite differences.

  • Forward Difference:

    \frac{\partial u}{\partial x} \approx \frac{u(x+h) - u(x)}{h}
  • Backward Difference:

    \frac{\partial u}{\partial x} \approx \frac{u(x) - u(x-h)}{h}
  • Central Difference:

    \frac{\partial u}{\partial x} \approx \frac{u(x+h) - u(x-h)}{2h}
  • Second Derivative:

    \frac{\partial^2 u}{\partial x^2} \approx \frac{u(x+h) - 2u(x) + u(x-h)}{h^2}

Example:

Discretize \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0.

Solution:

  1. Divide the domain into a grid with spacing h in both x and y.
  2. Replace the derivatives with finite differences: \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \approx \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2} + \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{h^2}.
  3. Combine terms: -4u_{i,j} + u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} = 0.

Practice Question: Derive the finite difference approximation for \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = 0.


3. Laplace's Equation

Form:

\nabla^2 u = 0, \quad \text{or} \quad \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0
  • Governs steady-state phenomena like heat distribution and electrostatics.

Numerical Solution:

Use the finite difference approximation:

u_{i,j} = \frac{1}{4}(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}).

Example:

Solve \nabla^2 u = 0 on a 2D square grid of size 4 \times 4 with boundary conditions:

  • u = 0 on all sides except u = 100 at x = 1.

Solution:

  1. Set up the grid: \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & u_{1,1} & u_{1,2} & 100 \\ 0 & u_{2,1} & u_{2,2} & 100 \\ 0 & 0 & 0 & 100 \end{bmatrix}.
  2. Apply the finite difference formula iteratively: u_{i,j} = \frac{1}{4}(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}).
  3. Iterations:
    • u_{1,1} = \frac{1}{4}(0 + u_{2,1} + u_{1,2} + 0).
    • Solve iteratively until convergence.

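The iteration above can be coded in a few lines; the C sketch below assumes the 4×4 grid of the example with u = 100 along the right edge and a fixed number of sweeps (both the sweep count and the exact boundary layout are assumptions for illustration).

#include <stdio.h>

#define N 4

int main(void) {
    double u[N][N] = {0};

    /* Assumed boundary condition: u = 100 on the right edge, as in the example grid */
    for (int i = 1; i < N; i++)
        u[i][N - 1] = 100.0;

    /* Repeatedly apply u_{i,j} = (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}) / 4 */
    for (int k = 0; k < 100; k++) {
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++)
                u[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1]);
    }

    for (int i = 0; i < N; i++) {           /* print the converged grid */
        for (int j = 0; j < N; j++)
            printf("%8.2lf ", u[i][j]);
        printf("\n");
    }
    return 0;
}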
Practice Question: Solve \nabla^2 u = 0 on a 3 \times 3 grid with u = 0 everywhere except u = 50 on the top edge.


4. Poisson's Equation

Form:

\nabla^2 u = f(x, y), \quad \text{or} \quad \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x, y)
  • Governs problems with distributed sources (e.g., heat with internal generation).

Numerical Solution:

Use the finite difference approximation:

u_{i,j} = \frac{1}{4}(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - h^2 f_{i,j}).

Example:

Solve \nabla^2 u = -4 on a 3 \times 3 grid with u = 0 on the boundaries.

Solution:

  1. Discretize: u_{i,j} = \frac{1}{4}(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} + 4h^2).
  2. With h = 1: u_{1,1} = \frac{1}{4}(0 + 0 + 0 + 0 + 4) = 1.
  3. Iterate until convergence.

Practice Question: Solve \nabla^2 u = x^2 + y^2 on a 4 \times 4 grid with u = 0 on the boundaries.


Summary

  • Laplace's Equation describes steady-state processes.
  • Poisson's Equation describes processes with distributed sources.
  • Finite Difference Method is key to solving PDEs numerically.
  • Iterative methods like Gauss-Seidel are used for solving grid-based equations efficiently.

 Topic: Lab Works

Laboratory Works: Numerical Methods Implementation

Laboratory sessions focus on developing and testing programs to solve mathematical problems numerically using C or C++ Builder. Below are the detailed outlines, algorithms, and example programs for each topic.


1. Non-linear Equations

Objective:

Write a program to solve a non-linear equation using methods like the Bisection Method, Newton-Raphson Method, or Fixed Point Iteration.

Example: Newton-Raphson Method

Algorithm:

  1. Define the function f(x) and its derivative f'(x).
  2. Choose an initial guess x_0.
  3. Iteratively update x using: x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.
  4. Stop when |f(x_n)| < \text{tolerance}.

Program in C:

#include <stdio.h>
#include <math.h>

double f(double x) {
    return x * x - 4; // Example function: x^2 - 4 = 0
}

double df(double x) {
    return 2 * x; // Derivative: 2x
}

int main() {
    double x0, tol, x1;
    int max_iter, iter = 0;

    printf("Enter initial guess: ");
    scanf("%lf", &x0);
    printf("Enter tolerance: ");
    scanf("%lf", &tol);
    printf("Enter maximum iterations: ");
    scanf("%d", &max_iter);

    do {
        x1 = x0 - f(x0) / df(x0);
        iter++;
        if (fabs(f(x1)) < tol) {
            printf("Root: %lf found in %d iterations\n", x1, iter);
            return 0;
        }
        x0 = x1;
    } while (iter < max_iter);

    printf("Failed to converge\n");
    return 1;
}

Practice: Write a program to solve \sin(x) - x^3 = 0 using the Bisection Method.


2. Interpolation

Objective:

Develop a program to perform interpolation using Lagrange's or Newton's method.

Example: Newton's Divided Difference Method

Algorithm:

  1. Create a divided difference table for the given data points.
  2. Construct the Newton's interpolation polynomial.

Program in C:

#include <stdio.h>

void dividedDifferenceTable(double x[], double y[], int n, double diffTable[][10]) {
    for (int i = 0; i < n; i++) diffTable[i][0] = y[i];
    for (int j = 1; j < n; j++) {
        for (int i = 0; i < n - j; i++) {
            diffTable[i][j] = (diffTable[i + 1][j - 1] - diffTable[i][j - 1]) / (x[i + j] - x[i]);
        }
    }
}

double newtonInterpolation(double x[], double diffTable[][10], int n, double value) {
    double result = diffTable[0][0];
    double term = 1.0;
    for (int i = 1; i < n; i++) {
        term *= (value - x[i - 1]);
        result += term * diffTable[0][i];
    }
    return result;
}

int main() {
    int n;
    printf("Enter the number of data points: ");
    scanf("%d", &n);

    double x[n], y[n], diffTable[10][10] = {0};

    printf("Enter x and y values:\n");
    for (int i = 0; i < n; i++) {
        scanf("%lf %lf", &x[i], &y[i]);
    }

    dividedDifferenceTable(x, y, n, diffTable);

    double value;
    printf("Enter the value of x to interpolate: ");
    scanf("%lf", &value);

    double result = newtonInterpolation(x, diffTable, n, value);
    printf("Interpolated value at x = %lf is %lf\n", value, result);

    return 0;
}

Practice: Implement Lagrange's Interpolation for the same dataset.


3. Numerical Differentiation and Integration

Objective:

Write a program to compute numerical derivatives and integrals using Trapezoidal Rule or Simpson's Rule.

Example: Simpson's 1/3 Rule

Algorithm:

  1. Divide the interval into nn subintervals.
  2. Compute the integral: I = \frac{h}{3} \left[f(x_0) + 4 \sum_{i=1,3,5,\dots}^{n-1} f(x_i) + 2 \sum_{i=2,4,6,\dots}^{n-2} f(x_i) + f(x_n)\right].

Program in C:

#include <stdio.h>
#include <math.h>

double f(double x) {
    return 1 / (1 + x * x); // Example function: 1 / (1 + x^2)
}

int main() {
    double a, b, h, sum = 0.0;
    int n;

    printf("Enter lower limit, upper limit, and subintervals: ");
    scanf("%lf %lf %d", &a, &b, &n);

    if (n % 2 != 0) {
        printf("Number of subintervals must be even\n");
        return 1;
    }

    h = (b - a) / n;

    for (int i = 1; i < n; i++) {
        if (i % 2 == 0) {
            sum += 2 * f(a + i * h);
        } else {
            sum += 4 * f(a + i * h);
        }
    }

    sum += f(a) + f(b);
    sum *= h / 3;

    printf("Integral: %lf\n", sum);
    return 0;
}

Practice: Compute the derivative of e^x at x = 1 using the central difference approximation.


4. Solution of Linear Algebraic Equations

Objective:

Solve a system of linear equations using methods like Gaussian Elimination or Gauss-Seidel Iteration.

Example: Gaussian Elimination

Program in C:

#include <stdio.h>

int main() {
    int n;
    printf("Enter the number of equations: ");
    scanf("%d", &n);

    double a[n][n+1], x[n];

    printf("Enter the augmented matrix:\n");
    for (int i = 0; i < n; i++) {
        for (int j = 0; j <= n; j++) {
            scanf("%lf", &a[i][j]);
        }
    }

    // Forward elimination
    for (int i = 0; i < n; i++) {
        for (int j = i+1; j < n; j++) {
            double factor = a[j][i] / a[i][i];
            for (int k = 0; k <= n; k++) {
                a[j][k] -= factor * a[i][k];
            }
        }
    }

    // Back substitution
    for (int i = n-1; i >= 0; i--) {
        x[i] = a[i][n];
        for (int j = i+1; j < n; j++) {
            x[i] -= a[i][j] * x[j];
        }
        x[i] /= a[i][i];
    }

    printf("Solution:\n");
    for (int i = 0; i < n; i++) {
        printf("x%d = %lf\n", i+1, x[i]);
    }

    return 0;
}

Practice: Solve a system of equations using Gauss-Seidel iteration.


5. Solution of ODEs

Objective:

Implement numerical methods like Euler's Method or Runge-Kutta Methods.

Example: Runge-Kutta (4th Order)

Program in C:

#include <stdio.h>

double f(double x, double y) {
    return x + y; // Example ODE: dy/dx = x + y
}

int main() {
    double x0, y0, h, xn;
    int n;

    printf("Enter initial x, initial y, step size, and final x: ");
    scanf("%lf %lf %lf %lf", &x0, &y0, &h, &xn);

    while (x0 < xn) {
        double k1 = h * f(x0, y0);
        double k2 = h * f(x0 + h / 2, y0 + k1 / 2);
        double k3 = h * f(x0 + h / 2, y0 + k2 / 2);
        double k4 = h * f(x0 + h, y0 + k3);
        y0 += (k1 + 2*k2 + 2*k3 + k4) / 6;
        x0 += h;
    }

    printf("Solution at x = %lf is y = %lf\n", xn, y0);
    return 0;
}


Practice: Implement Euler's method for the same ODE.


6. Solution of PDEs

Objective:

Solve PDEs using finite difference methods.

Example: Laplace's Equation

Solve \nabla^2 u = 0 on a 2D grid using iterative methods.

Practice: Implement Jacobi iteration for solving Laplace's equation.


 Topic: Past Questions

 Topic: Past Questions 2080

Tribhuvan University

Institute of Science and Technology

2080

Bachelor Level / third-semester / Science

Computer Science and Information Technology( CSC212 )

Numerical Method

Full Marks: 60 + 20 + 20

Pass Marks: 24 + 8 + 8

Time: 3 Hours

Candidates are required to give their answers in their own words as far as practicable.

The figures in the margin indicate full marks.

Section A

Attempt any two questions.

1. How does the secant method differ from the Newton-Raphson method? Derive the formula for the Secant Method. Solve the equation cos(x) + 2sin(x) – x² = 0 using the Secant method. Assume an error precision of 0.01. Discuss the drawbacks of the Newton-Raphson method.

2. Define the terms interpolation and extrapolation. Write down the algorithm and program for Newton’s divided difference interpolation.

3. How does the Gauss-Jordan method differ from the Gauss Elimination method? Solve the following system of equations using the Gauss-Jordan method. How can we use the Gauss-Jordan method to find the inverse of a matrix? Discuss.

2x – y + 4z = 15

2x + 3y – 2z = 4

3x + 2y – 4z = -4

Section B

Attempt any eight questions.

4. Define the terms approximate error and relative approximate error. Discuss the working of the Half Interval method for finding the roots of a non-linear equation.

5. Construct Newton’s backward difference table for given data points and approximate the value of f(x) at x=45.

x      10      20      30      40      50
f(x)   0.985   0.934   0.866   0.766   0.643

6. Fit the quadratic curve through the following data points and estimate the value of f(x) at x=2.

x   1   3   4   5   6
y   2   7   8   7   5

7. Factorise the following matrix using Cholesky method.



8. How can we calculate derivatives of discrete (tabulated) functions? Write down its algorithm.

9. Find the following integral using the composite trapezoidal rule with 2 segments (k = 2) and 4 segments (k = 4).

∫₂⁴ (x³ + 2) dx

10. Approximate the solution of y’ = 3x², y(1) = 1 using the first four terms of Taylor’s series method. Approximate the value of y(2).

11. Solve the Poisson’s equation ∇²f = xy with f = 2 on the boundary, assuming the square domain 0 ≤ x ≤ 3, 0 ≤ y ≤ 3 and h = 1.

12. Write down the program for solving ordinary differential equation using Heun’s method.

 

 Topic: Past Questions 2079

Tribhuvan University

Institute of Science and Technology

2079

Bachelor Level / third-semester / Science

Computer Science and Information Technology( CSC212 )

Numerical Method

Full Marks: 60 + 20 + 20

Pass Marks: 24 + 8 + 8

Time: 3 Hours

Candidates are required to give their answers in their own words as far as practicable.

The figures in the margin indicate full marks.

Section A

Attempt any two questions.

1. How can the secant method approximate the root of a non-linear equation? Explain with the necessary derivation. Estimate a real root of the following equation using the secant method. Assume an error precision of 0.01.

x³ + 2x – cos(x) = 4

2. How does spline interpolation differ from Lagrange’s interpolation? Estimate the values of f(0) and f(4) using cubic spline interpolation from the following data.

x      -1    1    2    3
f(x)   -10   -2   14   86

3. What is pivoting? Why is it necessary? Write an algorithm and program to solve a set of n linear equations using the Gaussian elimination method.

Section B

Attempt any eight questions.

4. Calculate a real root of the following function using the bisection method, correct up to 3 significant figures.

x² – e⁻ˣ = 3

5. What is fixed point iteration method? How can it converge to the root of a non-linear equation? Also explain the diverging cases with suitable examples.

6. Write down the program for solving ordinary differential equation using Heun’s method.

7. Fit the quadratic function for the data given below using the least square method.

x      1.0   1.5   2.0   2.5   3.0    3.5   4.0
f(x)   2.7   4     5.8   8.3   11.2   15    19

8. Estimate the integral value of the following function from x = 1.2 to 2.4 using Simpson’s 1/3 rule.

x      1.0    1.2    1.4    1.6    1.8    2.0    2.2    2.4     2.6
f(x)   1.53   2.25   3.18   4.32   5.67   7.23   8.98   10.94   13.08

9. What is the Gaussian integration formula? Evaluate the following integral using the Gaussian three-ordinate formula.

∫₀¹ (sin x)/x dx

10. Solve the following set of equations using the Gauss-Seidel method.

x + 2y + 3z = 4

6x + 4y + 5z = 16

5x + 2y + 3z = 12

11. Solve the following differential equation for 0 ≤ x ≤ 1 taking h = 0.5 using the Runge-Kutta 4th order method.

y'(x) + y = 3x with y(0)=2

12. Solve the Poisson’s equation ∇²f = 3x²y over the square domain 0 ≤ x ≤ 3, 0 ≤ y ≤ 3 with f = 0 on the boundary and h = 1.

 

 Topic: Past Questions 2078

Tribhuvan University

Institute of Science and Technology

2078

Bachelor Level / third-semester / Science

Computer Science and Information Technology( CSC212 )

Numerical Method

Full Marks: 60 + 20 + 20

Pass Marks: 24 + 8 + 8

Time: 3 Hours

Candidates are required to give their answers in their own words as far as practicable.

The figures in the margin indicate full marks.

Section A

Attempt any TWO questions:

1. How can Horner’s rule be used to evaluate f(x) and f′(x) of a polynomial at a given point? Explain. Write an algorithm and program to calculate a real root of a polynomial using Horner’s rule.

2. What is matrix factorization? How can it be used to solve a system of linear equations? Factorize the given matrix A and solve the system of equations Ax = b for the given b using the L and U matrices.

A =  and b = 

3. What is a higher-order differential equation? How can you solve a higher-order differential equation? Explain. Solve the following differential equation for 1 ≤ x ≤ 2, taking h = 0.25,

with y(1) = 1 and y′(1) = 2.

Section B

Attempt any EIGHT questions:

4. How can the half-interval method estimate a root of a non-linear equation? Find a real root of the following equation using the half-interval method, correct up to two decimal places.

x² – e⁻ˣ – x = 1

5. Calculate the real root of the given equation using fixed point iteration, correct up to 3 significant figures.

2x³ – 2x = 5

6. What is Newton’s interpolation? Obtain the divided difference table from the following data set and estimate f(x) at x = 2 and x = 5.

x      3.2    2.7    1.0    4.8    5.6
f(x)   22.0   17.8   14.2   38.3   51.7

7. What is linear regression? Fit the linear function to the following data.

x      1.0   1.2   1.4   1.6   1.8   2.0   2.2    2.4
f(x)   2.0   2.6   3.9   6.0   9.3   15    20.6   30.4

8. What are the problems with polynomial interpolation for a large number of data points? How are such problems addressed? Explain with an example.

9. Evaluate the following integral using Romberg integration.

\int_0^1 \frac{\sin 2x}{x} \, dx

10. Solve the following set of linear equations using the Gauss-Jordan method.

x₂ + 2x₃ + 3x₄ = 9

7x₁ + 6x₂ + 5x₃ + 4x₄ = 33

8x₁ + 9x₂ + x₄ = 27

2x₁ + 5x₂ + 4x₃ + 3x₄ = 23

11. Solve the following differential equation for 1 ≤ x ≤ 2, taking h = 0.25, using Heun’s method.

y′(x) + x²y = 3x, with y(1) = 1

12. Consider a metallic plate of size 90 cm by 90 cm. Two adjacent sides of the plate are maintained at a temperature of 100°C and the remaining two adjacent sides are held at 200°C. Calculate the steady-state temperature at the interior points assuming a grid size of 30 cm by 30 cm.

 

 Topic: Past Questions Solutions

 Topic: Past Questions 2080 Solutions


Section A


1. How does the secant method differ from the Newton-Raphson method? Derive the formula for the Secant Method. Solve the equation cos(x) + 2sin(x) - x² = 0 using the Secant method. Assume error precision as 0.01. Discuss the drawbacks of the Newton-Raphson method.

Secant Method vs Newton-Raphson Method:

  • Newton-Raphson Method: It is based on an approximation of the function’s tangent at the current guess. The formula is:

    x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

    Here, f'(x_n) is the derivative of the function at x_n.

  • Secant Method: It approximates the derivative by using two previous points, instead of calculating the derivative explicitly. The formula is:

    x_{n+1} = x_n - \frac{f(x_n) (x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}

    In the Secant method, no derivative is required. This is an advantage when the derivative of the function is difficult to compute.

Derivation of the Secant Method: The Secant method uses the equation of the secant line passing through the two points (x_n, f(x_n)) and (x_{n-1}, f(x_{n-1})). The secant line is given by:

f(x) = f(x_n) + \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}} (x - x_n)

Setting f(x) = 0 (to find the root), we get the Secant method formula:

x_{n+1} = x_n - \frac{f(x_n)(x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}

Solving the Equation using Secant Method: We are tasked with solving:

\cos(x) + 2\sin(x) - x^2 = 0

using the Secant method.

  1. Initial guesses: Let’s assume x_0 = 0.5 and x_1 = 1.0.
  2. Function and iteration:
    • f(x) = \cos(x) + 2\sin(x) - x^2
  3. Iterations: x_{n+1} = x_n - \frac{f(x_n) (x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}. Compute values of f(x), update x_{n+1}, and stop when the error is less than 0.01.

Drawbacks of the Newton-Raphson Method:

  • Requires the derivative of the function, which might be difficult or impossible to compute.
  • The method might not converge if the initial guess is far from the root.
  • It can fail if the derivative at the root is zero or very small.

2. Define the terms interpolation and extrapolation. Write down the algorithm and program for Newton’s divided difference interpolation.

  • Interpolation: Finding the value of the function within the range of known data points.
  • Extrapolation: Finding the value of the function outside the range of known data points.

Algorithm for Newton’s Divided Difference Interpolation:

  1. Create a divided difference table for the given data points.
  2. The first column contains the values of f(x)f(x).
  3. Each subsequent column contains the divided differences.
  4. The interpolating polynomial is given by: P(x) = f(x_0) + (x - x_0) \cdot f[x_0, x_1] + (x - x_0)(x - x_1) \cdot f[x_0, x_1, x_2] + \dots
  5. Evaluate the polynomial at the desired point.

3. How does the Gauss-Jordan method differ from the Gauss Elimination method? Solve the following system of equations using the Gauss-Jordan method. How can we use the Gauss-Jordan method to find the inverse of a matrix? Discuss.

  • Gauss-Jordan Method: It reduces the augmented matrix to the reduced row echelon form (RREF), with the goal of solving the system directly.
  • Gauss Elimination Method: It reduces the matrix to upper triangular form and then performs back substitution.

Solving the system using Gauss-Jordan Method: Given the system of equations:

2x - y + 4z = 15, \quad 2x + 3y - 2z = 4, \quad 3x + 2y - 4z = -4

  1. Write the augmented matrix for the system: \begin{bmatrix} 2 & -1 & 4 & 15 \\ 2 & 3 & -2 & 4 \\ 3 & 2 & -4 & -4 \end{bmatrix}
  2. Use row operations to reduce the matrix to reduced row echelon form.
  3. The solution is then read from the last column.

Finding the Inverse of a Matrix Using Gauss-Jordan Method: To find the inverse of a matrix, augment the matrix with the identity matrix and use Gauss-Jordan elimination to reduce the matrix to the identity matrix. The resulting matrix on the right will be the inverse.


Section B


4. Define the terms approximate error and relative approximate error? Discuss the working of the Half Interval method for finding the roots of non-linear equation.

  • Approximate Error: The difference between successive approximations.

    E_{\text{approx}} = |x_{n+1} - x_n|
  • Relative Approximate Error: The approximate error relative to the current approximation.

    E_{\text{rel}} = \frac{|x_{n+1} - x_n|}{|x_{n+1}|}

Half Interval Method (Bisection Method):

  1. Choose two initial guesses, x_1 and x_2, such that f(x_1) and f(x_2) have opposite signs.
  2. Compute the midpoint: x_m = \frac{x_1 + x_2}{2}.
  3. Evaluate f(x_m). If f(x_m) is sufficiently close to zero or the error is small, stop; otherwise, replace one of the guesses with x_m and repeat the process.

5. Construct Newton’s backward difference table for given data points and approximate the value of f(x) at x=45.

Given:

x = \{10, 20, 30, 40, 50\}, \quad f(x) = \{0.985, 0.934, 0.866, 0.766, 0.643\}
  1. Construct the backward difference table by calculating the differences.
  2. Use Newton’s backward interpolation formula: P(x) = f(x_n) + (x - x_n)\nabla f(x_n) + \dots
  3. Use the table to find the value of f(x) at x = 45.

6. Fit the quadratic curve through the following data points and estimate the value of f(x) at x=2.

Given points:

(x, y) = \{(1, 2), (3, 7), (4, 8), (5, 7), (6, 5)\}
  1. Use the method of least squares to fit the quadratic curve y = ax^2 + bx + c.
  2. Solve for a, b, and c using the normal equations.
  3. Substitute x = 2 to estimate the value of f(x).

7. Factorize the following matrix using the Cholesky method.

This question would require specific matrix data to perform the Cholesky decomposition.


8. How can we calculate derivatives of discrete (tabulated) functions? Write down its algorithm.

  1. Forward Difference:

    f'(x) \approx \frac{f(x+h) - f(x)}{h}
  2. Backward Difference:

    f'(x) \approx \frac{f(x) - f(x-h)}{h}
  3. Central Difference:

    f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}

9. Find the following integral using the composite trapezoidal rule for k = 2 and k = 4.

This would require applying the trapezoidal rule to the integral:

\int_2^4 (x^3 + 2) \, dx

Use the formula for the composite trapezoidal rule to compute the value of the integral for the given segments.


10. Approximate the solution of y' = 3x^2, y(1) = 1 using the first four terms of Taylor’s series method. Approximate the value of y(2).

Use the Taylor series expansion:

y(x) = y(x_0) + (x - x_0)y'(x_0) + \frac{(x - x_0)^2}{2!}y''(x_0) + \dots

Substitute the known values to compute y(2).


11. Solve the Poisson’s equation \nabla^2 f = xy with f = 2 on the boundary, assuming the square domain 0 \leq x \leq 3, 0 \leq y \leq 3 and h = 1.

You can apply finite difference methods to solve this. Using a grid with spacing h = 1, compute the values at each grid point and solve iteratively.


12. Write down the program for solving ordinary differential equation using Heun’s method.

Heun’s method is an improved version of Euler’s method. The algorithm is:

  1. Compute the slopes: k_1 = f(x_n, y_n), \quad k_2 = f(x_n + h, y_n + h \cdot k_1)
  2. Update the value: y_{n+1} = y_n + \frac{h}{2}(k_1 + k_2)

This can be implemented in a programming language such as Python.
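As one concrete illustration, a minimal sketch in C (the language used for the lab programs in these notes) is given below; the test equation dy/dx = x + y with y(0) = 1, the step size h = 0.1, and the end point x = 1 are assumptions for demonstration only.

#include <stdio.h>

double f(double x, double y) {
    return x + y;                   /* assumed test ODE: dy/dx = x + y */
}

int main(void) {
    double x = 0.0, y = 1.0;        /* assumed initial condition y(0) = 1 */
    double h = 0.1, xn = 1.0;       /* assumed step size and end point */

    while (x < xn - 1e-9) {
        double k1 = f(x, y);        /* slope at the start of the step */
        double yp = y + h * k1;     /* Euler predictor */
        double k2 = f(x + h, yp);   /* slope at the predicted point */
        y += h / 2.0 * (k1 + k2);   /* corrector: average of the two slopes */
        x += h;
    }
    printf("y(%.1lf) = %lf\n", x, y);
    return 0;
}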




 Topic: Past Questions 2079 Solutions

Section A


1. How can the secant method approximate the root of a non-linear equation? Explain with the necessary derivation. Estimate a real root of the following equation using the secant method. Assume an error precision of 0.01.

Given the equation:

x^3 + 2x - \cos(x) = 4

Secant Method:

The Secant method is used to find the roots of a non-linear equation without requiring the computation of derivatives. The Secant method formula is:

x_{n+1} = x_n - \frac{f(x_n)(x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}

Where:

  • f(x) is the function whose root we want to find.
  • x_n and x_{n-1} are two initial guesses.

Steps to Apply Secant Method:

  1. Define the function:

    f(x) = x^3 + 2x - \cos(x) - 4
  2. Choose initial guesses: Let's take x_0 = 1 and x_1 = 2.

  3. Compute x_2 using the Secant formula:

    x_2 = x_1 - \frac{f(x_1)(x_1 - x_0)}{f(x_1) - f(x_0)}
  4. Repeat until the error precision is less than 0.01.

Iteration 1:

f(1) = 1^3 + 2(1) - \cos(1) - 4 \approx -1.5403, \quad f(2) = 2^3 + 2(2) - \cos(2) - 4 \approx 8.4161, \quad x_2 = 2 - \frac{(8.4161)(2 - 1)}{8.4161 - (-1.5403)} \approx 1.1547

Iteration 2:

f(1.1547) = 1.1547^3 + 2(1.1547) - \cos(1.1547) - 4 \approx -0.556, \quad x_3 = 1.1547 - \frac{(-0.556)(1.1547 - 2)}{-0.556 - 8.4161} \approx 1.207

Continue until the error between successive approximations is less than 0.01.


2. How does spline interpolation differ from Lagrange’s interpolation? Estimate the value of f(0) and f(4) using cubic spline interpolation from the following data.

Spline Interpolation vs Lagrange Interpolation:

  • Lagrange Interpolation: This method constructs a polynomial that passes through all the data points. The polynomial has degree n-1, where n is the number of data points.

  • Spline Interpolation: It uses piecewise polynomials (splines) to fit the data. The most common type is cubic splines, where each piece of the polynomial is of degree 3. The advantage is that it avoids large oscillations that high-degree polynomials in Lagrange interpolation might produce.

Given Data Points:

x = [-1, 1, 2, 3], \quad f(x) = [-10, -2, 14, 86]

To estimate f(0) and f(4), we need to construct the cubic spline and use it to find these values. This typically involves:

  1. Solving a system of linear equations to determine the coefficients of the cubic splines.
  2. Using the spline formula to compute the values of f(0) and f(4).

(For simplicity, you can use numerical software or tools like MATLAB or Python to construct and solve the cubic spline system.)


3. What is pivoting? Why is it necessary? Write an algorithm and program to solve a set of n linear equations using the Gaussian elimination method.

Pivoting: Pivoting refers to the process of selecting the largest possible element (in absolute value) from the current column during Gaussian elimination. This is done to improve the numerical stability of the algorithm.

  • Partial Pivoting: Choosing the largest element in the current column as the pivot.
  • Complete Pivoting: Choosing the largest element in the remaining matrix (both row and column).

Algorithm for Gaussian Elimination:

  1. For each column, find the pivot element.
  2. Swap rows to place the pivot element in the current column.
  3. Perform row elimination to form an upper triangular matrix.
  4. Use back substitution to find the solution.

Section B


4. Calculate a real root of the following function using the bisection method correct up to 3 significant figures.

Given the function:

f(x) = x^2 - e^{-x} - 3
  • Bisection Method works by repeatedly halving an interval and selecting the subinterval in which the function changes sign. The root is approximated as the midpoint of the interval.

Steps:

  1. Choose initial guesses, say x_1 = 1 and x_2 = 2.
  2. Compute f(x_1) and f(x_2).
  3. If f(x_1) and f(x_2) have opposite signs, compute the midpoint x_m and evaluate f(x_m).
  4. Repeat until the error is less than 0.001 (3 significant figures).

5. What is the fixed-point iteration method? How can it converge to the root of a non-linear equation? Also, explain the diverging cases with suitable examples.

Fixed-Point Iteration Method: This method involves rearranging the original equation into the form:

x=g(x)x = g(x)

Then, starting with an initial guess x0x_0, iterate using:

xn+1=g(xn)x_{n+1} = g(x_n)

Convergence: The method converges if the function g(x)g(x) satisfies the condition g(x)<1|g'(x)| < 1 near the root.

Divergence Cases: If g(x)1|g'(x)| \geq 1, the method will not converge and may even diverge.

Example of Divergence: Take g(x)=x2g(x) = x^2, whose fixed point x=1x = 1 is a root of x2x=0x^2 - x = 0. Since g(1)=2>1|g'(1)| = 2 > 1, the iteration is repelled from this root: starting with x0=1.5x_0 = 1.5 gives 2.25, 5.06, 25.6, ... and diverges. (A guess such as x0=0.5x_0 = 0.5 is instead attracted to the other fixed point x=0x = 0, where g(0)=0<1|g'(0)| = 0 < 1.)
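
The sketch below illustrates the convergence condition with two rearrangements of x^2 - 2 = 0 (an illustrative choice, not part of the original question): one with |g'| < 1 at the root and one with |g'| > 1.

def fixed_point(g, x0, n=5):
    # Apply x_{k+1} = g(x_k) for n steps and return the whole sequence
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

# Both rearrangements have the fixed point x = sqrt(2), a root of x^2 - 2 = 0
g_conv = lambda x: (x + 2 / x) / 2    # |g'(sqrt(2))| = 0   -> converges
g_div  = lambda x: x**2 + x - 2       # |g'(sqrt(2))| ≈ 3.8 -> diverges

print(fixed_point(g_conv, 1.5))   # approaches 1.41421...
print(fixed_point(g_div, 1.5))    # moves rapidly away from 1.41421...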


6. Write down the program for solving ordinary differential equation using Heun’s method.

Heun’s method is a type of predictor-corrector method. The steps are:

  1. Compute the predictor value using Euler’s method: yn+1(p)=yn+hf(xn,yn)y_{n+1}^{(p)} = y_n + h \cdot f(x_n, y_n)
  2. Compute the corrector value using the average of slopes: yn+1=yn+h2(f(xn,yn)+f(xn+1,yn+1(p)))y_{n+1} = y_n + \frac{h}{2} \left( f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{(p)}) \right)
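
A possible Python program for these two steps; the test problem y' = y, y(0) = 1 is a hypothetical example whose exact solution is e^x.

def heun(f, x0, y0, h, n):
    """Heun's predictor-corrector method for y' = f(x, y)."""
    x, y = x0, y0
    for _ in range(n):
        predictor = y + h * f(x, y)                      # Euler predictor
        y = y + h / 2 * (f(x, y) + f(x + h, predictor))  # trapezoidal corrector
        x = x + h
    return y

# Hypothetical test problem: y' = y, y(0) = 1
print(heun(lambda x, y: y, 0.0, 1.0, 0.1, 10))   # ≈ 2.714, exact e ≈ 2.71828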

7. Fit the quadratic function for the data given below using the least square method.

Given the data points:

x=[1,1.5,2,2.5,3,3.5,4],f(x)=[2.74,5.88,8.31,11.21,15.19,19,23.5]x = [1, 1.5, 2, 2.5, 3, 3.5, 4], \quad f(x) = [2.74, 5.88, 8.31, 11.21, 15.19, 19, 23.5]

To fit a quadratic curve f(x)=ax2+bx+cf(x) = ax^2 + bx + c using the least square method:

  1. Set up the normal equations for aa, bb, and cc.
  2. Solve the system of equations to find the values of aa, bb, and cc.
  3. Use the obtained quadratic equation to predict values for f(x)f(x).
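
A Python sketch that builds and solves the normal equations with NumPy (the coefficients are obtained numerically rather than by hand):

import numpy as np

x = np.array([1, 1.5, 2, 2.5, 3, 3.5, 4])
y = np.array([2.74, 5.88, 8.31, 11.21, 15.19, 19, 23.5])

# Normal equations for f(x) = a*x^2 + b*x + c built from the design matrix [x^2, x, 1]
X = np.column_stack([x**2, x, np.ones_like(x)])
a, b, c = np.linalg.solve(X.T @ X, X.T @ y)
print(f"f(x) ≈ {a:.4f} x^2 + {b:.4f} x + {c:.4f}")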

8. Estimate the integral value of the following function from x=1.2x = 1.2 to x=2.4x = 2.4 using Simpson’s 1/3 rule.

Given:

x=[1.0,1.2,1.4,1.6,1.8,2.0,2.2,2.4],f(x)=[1.53,2.25,3.18,4.32,5.67,7.23,8.98,10.94]x = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4], \quad f(x) = [1.53, 2.25, 3.18, 4.32, 5.67, 7.23, 8.98, 10.94]

Simpson’s 1/3 rule is given by:

I = \frac{h}{3} \left( f(x_0) + 4 \sum_{\text{odd } i} f(x_i) + 2 \sum_{\text{even } i,\; 0 < i < n} f(x_i) + f(x_n) \right), \quad \text{where } h = \frac{b - a}{n}.
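
Since the integral runs from x = 1.2 to x = 2.4, only the last seven tabulated points are used (n = 6 strips, h = 0.2). A short Python sketch of the computation:

# Points from x = 1.2 to x = 2.4 only
x = [1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4]
fx = [2.25, 3.18, 4.32, 5.67, 7.23, 8.98, 10.94]

h = 0.2
odd_sum = sum(fx[i] for i in range(1, len(fx) - 1, 2))    # f(x1), f(x3), f(x5)
even_sum = sum(fx[i] for i in range(2, len(fx) - 1, 2))   # f(x2), f(x4)
I = h / 3 * (fx[0] + 4 * odd_sum + 2 * even_sum + fx[-1])
print(I)   # ≈ 7.17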


 Topic: Past Questions 2078 Solutions

Let's go ahead and solve each of the questions one by one in detail:


Section A - Attempt Any Two Questions


1. Horner's Rule for Polynomial Evaluation

Question: How can Horner's rule be used to evaluate f(x)f(x) and f(x)f'(x) of a polynomial at a given point? Explain, and write an algorithm and program to calculate a real root of a polynomial using Horner's rule.

Explanation:

Horner’s rule is an efficient method for evaluating a polynomial and its derivative. A polynomial of degree nn is expressed as:

f(x)=anxn+an1xn1++a1x+a0f(x) = a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0

In Horner’s form, this is rewritten as:

f(x)=(((anx+an1)x+an2)x++a1)x+a0f(x) = (((a_n x + a_{n-1}) x + a_{n-2}) x + \dots + a_1) x + a_0

This reduces the number of multiplications and additions needed to evaluate the polynomial.

Algorithm:

  1. Input: Coefficients of the polynomial an,an1,,a0a_n, a_{n-1}, \dots, a_0, value xx.
  2. Initialize: Set f(x)=anf(x) = a_n.
  3. Iterate: For each coefficient, update f(x)f(x) as f(x)=f(x)x+aif(x) = f(x) \cdot x + a_i.
  4. Output: The value of f(x)f(x).

Program (Python Example):

def horner(coeffs, x):
    result = coeffs[0]
    for i in range(1, len(coeffs)):
        result = result * x + coeffs[i]
    return result

# Example Polynomial: 3x^3 - 5x^2 + 2x - 7
coeffs = [3, -5, 2, -7]
x = 2
print("f(x) at x =", x, "is:", horner(coeffs, x))

To Solve for the Root of a Polynomial:

You can use a root-finding method like Newton-Raphson combined with Horner's Rule to evaluate both the function and its derivative efficiently. In general, we will iterate until we get a root with the desired precision.
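
A possible Python sketch combining Horner's rule, which yields f(x) and f'(x) in a single pass (synthetic division), with Newton-Raphson iteration; the polynomial 3x^3 - 5x^2 + 2x - 7 is reused from the example above.

def horner_with_derivative(coeffs, x):
    """Evaluate f(x) and f'(x) simultaneously using Horner's rule."""
    f = coeffs[0]
    df = 0.0
    for a in coeffs[1:]:
        df = df * x + f      # derivative accumulates the previous f value
        f = f * x + a
    return f, df

def newton_horner(coeffs, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson iteration with Horner's rule supplying f and f'."""
    x = x0
    for _ in range(max_iter):
        f, df = horner_with_derivative(coeffs, x)
        x_new = x - f / df
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example polynomial from above: 3x^3 - 5x^2 + 2x - 7
print(newton_horner([3, -5, 2, -7], 2.0))   # converges to a real root near 1.94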


2. Matrix Factorization to Solve Linear Equations

Question: Write matrix factorization and solve the system of linear equations Ax=bA x = b for AA and bb given below using LU Decomposition.

Given:

A=[1232813236],b=[4128]A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 8 & 13 \\ 2 & 3 & 6 \end{bmatrix}, \quad b = \begin{bmatrix} 4 \\ 12 \\ 8 \end{bmatrix}

We need to perform LU Decomposition where A=LUA = LU, and solve the system Ax=bA x = b for xx.

Steps for LU Decomposition:

  1. Decompose AA into LL and UU: The matrix AA is decomposed into a lower triangular matrix LL and an upper triangular matrix UU.

    A=[1232813236]A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 8 & 13 \\ 2 & 3 & 6 \end{bmatrix}

    The LU decomposition process involves solving for LL and UU, using Gaussian elimination without pivoting.

  2. Solve Lz=bL z = b for zz: Using forward substitution to solve for zz.

  3. Solve Ux=zU x = z for xx: Using backward substitution to solve for xx.

Solution:

This step is usually performed using a Gaussian elimination method, which is implemented numerically in most scientific computing tools. Here's a general approach to solve it:

  • Compute LL and UU (LU Decomposition).
  • Solve Lz=bLz = b using forward substitution.
  • Solve Ux=zUx = z using backward substitution.
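
A Python sketch of Doolittle LU decomposition without pivoting, applied to the given A and b; for brevity the forward and backward substitutions are delegated to np.linalg.solve on the triangular factors.

import numpy as np

def doolittle_lu(A):
    """Doolittle LU decomposition without pivoting: A = L U with unit diagonal in L."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i, n):                       # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):                   # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[1.0, 2.0, 3.0], [2.0, 8.0, 13.0], [2.0, 3.0, 6.0]])
b = np.array([4.0, 12.0, 8.0])

L, U = doolittle_lu(A)
z = np.linalg.solve(L, b)   # forward substitution:  L z = b
x = np.linalg.solve(U, z)   # backward substitution: U x = z
print(x)                    # roughly [2.2857, 0, 0.5714]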

3. Solving a Higher-Order Differential Equation

Question: Solve the higher-order differential equation:

d2ydx2+3dydx+5y=0,y(1)=1,y(1)=2,using step size h=0.25.\frac{d^2y}{dx^2} + 3 \frac{dy}{dx} + 5y = 0, \quad y(1) = 1, \quad y'(1) = 2, \quad \text{using step size } h = 0.25.

We will solve this second-order ODE using Euler's method for numerical approximation. The equation is already in a form suitable for splitting into a system of first-order equations:

Let:

  • v=dydxv = \frac{dy}{dx}, so dvdx=d2ydx2\frac{dv}{dx} = \frac{d^2y}{dx^2}.

The system of equations is:

\frac{dy}{dx} = v
\frac{dv}{dx} = -3v - 5y

Steps:

  1. Initialize with the initial values y(1)=1y(1) = 1 and v(1)=2v(1) = 2.
  2. Use Euler’s method to iteratively solve for yy and vv at each step xnx_n.

Euler’s method:

y_{n+1} = y_n + h \cdot v_n
v_{n+1} = v_n + h \cdot (-3v_n - 5y_n)

Iteration:

Let’s calculate step by step:

Initial conditions: y0=1y_0 = 1, v0=2v_0 = 2, h=0.25h = 0.25.

  1. Step 1:

    • y1=y0+hv0=1+0.252=1.5y_1 = y_0 + h \cdot v_0 = 1 + 0.25 \cdot 2 = 1.5
    • v1=v0+h(3v05y0)=2+0.25(3(2)5(1))=2+0.25(65)=22.75=0.75v_1 = v_0 + h \cdot (-3v_0 - 5y_0) = 2 + 0.25 \cdot (-3(2) - 5(1)) = 2 + 0.25 \cdot (-6 - 5) = 2 - 2.75 = -0.75
  2. Step 2:

    • y2=y1+hv1=1.5+0.25(0.75)=1.50.1875=1.3125y_2 = y_1 + h \cdot v_1 = 1.5 + 0.25 \cdot (-0.75) = 1.5 - 0.1875 = 1.3125
    • v2=v1+h(3v15y1)=0.75+0.25(3(0.75)5(1.5))=0.75+0.25(2.257.5)=0.75+0.25(5.25)=0.751.3125=2.0625v_2 = v_1 + h \cdot (-3v_1 - 5y_1) = -0.75 + 0.25 \cdot (-3(-0.75) - 5(1.5)) = -0.75 + 0.25 \cdot (2.25 - 7.5) = -0.75 + 0.25 \cdot (-5.25) = -0.75 - 1.3125 = -2.0625

Repeat these steps for the desired range to approximate the solution.
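
A small Python sketch of these Euler steps for the system; it reproduces the hand values y_1 = 1.5 and v_1 = -0.75 above.

def euler_second_order(y0, v0, x0, h, n):
    """Euler's method for y'' + 3y' + 5y = 0 written as y' = v, v' = -3v - 5y."""
    y, v, x = y0, v0, x0
    print(f"x = {x:.2f}, y = {y:.4f}, v = {v:.4f}")
    for _ in range(n):
        # Simultaneous update: both formulas use the old y and v
        y, v = y + h * v, v + h * (-3 * v - 5 * y)
        x += h
        print(f"x = {x:.2f}, y = {y:.4f}, v = {v:.4f}")

# Initial conditions y(1) = 1, y'(1) = 2, step h = 0.25 (four steps reach x = 2)
euler_second_order(1.0, 2.0, 1.0, 0.25, 4)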




Section B - Attempt Any Eight Questions


4. Half-Interval Method

We are tasked with solving the following equation using the Half-Interval Method (Bisection Method):

x2exx=1x^2 - e^{-x} - x = 1

To apply the Half-Interval Method:

  1. Define the function:

    f(x)=x2exx1f(x) = x^2 - e^{-x} - x - 1
  2. Choose an initial interval [x1,x2][x_1, x_2] on which the function changes sign, say x1=1x_1 = 1 and x2=2x_2 = 2.

  3. Check the signs of f(x1)f(x_1) and f(x2)f(x_2):

    • f(x_1) = f(1) = 1^2 - e^{-1} - 1 - 1 = 1 - 0.3679 - 2 = -1.3679
    • f(x_2) = f(2) = 2^2 - e^{-2} - 2 - 1 = 4 - 0.1353 - 3 = 0.8647

    Since f(x1)f(x_1) and f(x2)f(x_2) have opposite signs, there is a root between them.

  4. Iterate: Find the midpoint xm=x1+x22x_m = \frac{x_1 + x_2}{2}, and evaluate f(xm)f(x_m). Continue halving the interval until the error is sufficiently small.

Repeat this process until you reach the desired precision.
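
A minimal Python sketch, using the bracket [1, 2] where the sign change occurs:

import math

def bisect(f, a, b, tol=1e-4):
    # Half-interval search on a sign-changing bracket [a, b]
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) < 0:
            b = m
        else:
            a = m
    return (a + b) / 2

f = lambda x: x**2 - math.exp(-x) - x - 1
print(bisect(f, 1.0, 2.0))   # root ≈ 1.70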


5. Fixed-Point Iteration

We need to solve the following equation using Fixed-Point Iteration:

2x32x=52x^3 - 2x = 5
  1. Rearrange the equation into the form x=g(x)x = g(x). Writing 2x^3 = 5 + 2x and taking the cube root gives a convergent rearrangement:

    x = \left( \frac{5 + 2x}{2} \right)^{1/3}

    (The alternative rearrangement x = \frac{5 + 2x}{2x^2} has |g'(x)| > 1 near the root, so the iteration built from it diverges.)

  2. Choose an initial guess x0x_0, say x0=1x_0 = 1.

  3. Iterate:

    x_{n+1} = \left( \frac{5 + 2x_n}{2} \right)^{1/3}

    Continue iterating until xn+1xn<ϵ|x_{n+1} - x_n| < \epsilon (the desired precision), as in the sketch below.
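
A short Python sketch using the cube-root rearrangement:

def fixed_point_iteration(g, x0, tol=1e-4, max_iter=100):
    # Iterate x_{n+1} = g(x_n) until successive values agree within tol
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Convergent rearrangement of 2x^3 - 2x = 5:  x = ((5 + 2x) / 2)^(1/3)
g = lambda x: ((5 + 2 * x) / 2) ** (1 / 3)
print(fixed_point_iteration(g, 1.0))   # ≈ 1.60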


6. Newton’s Interpolation

We are given the following data points and need to construct a Divided Difference Table to perform Newton’s interpolation:

x = [3.2, 2.7, 1.0, 4.8, 5.6], \quad f(x) = [2.0, 17.8, 14.2, 8.35, 1.7]

  1. Construct the divided difference table for the given data:

    f[x_0] = f(x_0), \quad f[x_0, x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}, \quad \dots

  2. Use Newton's interpolation formula to estimate f(x)f(x) at specific points.

The formula is:

Pn(x)=f[x0]+(xx0)f[x0,x1]+(xx0)(xx1)f[x0,x1,x2]+P_n(x) = f[x_0] + (x - x_0)f[x_0, x_1] + (x - x_0)(x - x_1)f[x_0, x_1, x_2] + \dots

Using this, you can estimate f(x)f(x) at x=2x = 2 and x=5x = 5.
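
A possible Python sketch that builds the divided-difference table and evaluates the Newton polynomial, using the tabulated values exactly as printed above:

import numpy as np

def divided_difference_table(x, y):
    """Column j of the table holds the j-th order divided differences."""
    n = len(x)
    table = np.zeros((n, n))
    table[:, 0] = y
    for j in range(1, n):
        for i in range(n - j):
            table[i, j] = (table[i + 1, j - 1] - table[i, j - 1]) / (x[i + j] - x[i])
    return table

def newton_interpolate(x, table, t):
    """Evaluate Newton's divided-difference polynomial P_n(t)."""
    result = table[0, 0]
    product = 1.0
    for j in range(1, len(x)):
        product *= (t - x[j - 1])
        result += table[0, j] * product
    return result

x = [3.2, 2.7, 1.0, 4.8, 5.6]
y = [2.0, 17.8, 14.2, 8.35, 1.7]
table = divided_difference_table(x, y)
print("f(2) ≈", newton_interpolate(x, table, 2.0))
print("f(5) ≈", newton_interpolate(x, table, 5.0))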


7. Linear Regression

We need to fit a linear function to the following data using the Least Squares Method:

x = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4], \quad f(x) = [2.0, 2.6, 3.9, 6.0, 9.3, 15.2, 20.6, 30.4]
  1. Set up the system for linear regression: We want to fit a line y=mx+by = mx + b, where:

    m = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - (\sum x_i)^2}, \quad b = \frac{\sum y_i - m \sum x_i}{n}
  2. Solve the system to find the values of mm and bb that minimize the error.
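
A direct Python translation of these formulas:

x = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4]
y = [2.0, 2.6, 3.9, 6.0, 9.3, 15.2, 20.6, 30.4]

n = len(x)
sx, sy = sum(x), sum(y)
sxy = sum(xi * yi for xi, yi in zip(x, y))
sxx = sum(xi * xi for xi in x)

m = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
b = (sy - m * sx) / n                           # intercept
print(f"y ≈ {m:.4f} x + {b:.4f}")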


8. Problems with Polynomial Interpolation

Polynomial interpolation can lead to problems, especially for large datasets. These problems include:

  1. Runge's Phenomenon: Polynomial interpolation can oscillate wildly at the ends of the interval when the degree of the polynomial increases.
  2. Overfitting: High-degree polynomials can fit the data perfectly but may not generalize well to new data points.

How to Address the Problems:

  • Piecewise Interpolation: Instead of fitting a single polynomial, use piecewise interpolation (e.g., Spline Interpolation) to avoid oscillations.
  • Least Squares Method: Fit a lower-degree polynomial that minimizes the error.

9. Romberg Integration

We need to evaluate the following integral using Romberg Integration:

I=01sin2(x)dxI = \int_0^1 \sin^2(x) \, dx

The Romberg integration formula is a recursive method that improves the Trapezoidal rule by considering successive refinements.

The method is given by:

R_{0,0} = \frac{b - a}{2} \left( f(a) + f(b) \right), \qquad R_{k,0} = \text{composite trapezoidal estimate with } 2^k \text{ subintervals}
R_{k,j} = \frac{4^j R_{k,j-1} - R_{k-1,j-1}}{4^j - 1}, \qquad j \geq 1

Continue until the desired accuracy is achieved.
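
A Python sketch of the Romberg table for this integral; the exact value is 1/2 - sin(2)/4 ≈ 0.2727, which the table should approach.

import math
import numpy as np

def romberg(f, a, b, levels=5):
    """R[k, 0] is the trapezoidal rule with 2^k strips; R[k, j] is the j-th extrapolation."""
    R = np.zeros((levels, levels))
    R[0, 0] = (b - a) / 2 * (f(a) + f(b))
    for k in range(1, levels):
        h = (b - a) / 2**k
        # Refine the trapezoidal estimate using only the newly added midpoints
        new_points = sum(f(a + (2 * i - 1) * h) for i in range(1, 2**(k - 1) + 1))
        R[k, 0] = R[k - 1, 0] / 2 + h * new_points
        for j in range(1, k + 1):
            R[k, j] = (4**j * R[k, j - 1] - R[k - 1, j - 1]) / (4**j - 1)
    return R[levels - 1, levels - 1]

print(romberg(lambda x: math.sin(x)**2, 0.0, 1.0))   # ≈ 0.2727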


10. Gauss-Jordan Method

We are given the following system of linear equations:

x_1 + 2x_2 + 3x_3 + 4x_4 = 9
7x_1 + 6x_2 + 5x_3 + 4x_4 = 33
8x_1 + 9x_2 + x_4 = 27
2x_1 + 5x_2 + 4x_3 + 3x_4 = 23
  1. Form the augmented matrix [A | b] for the system.
  2. Apply elementary row operations (with partial pivoting for stability) to reduce the coefficient part to the identity matrix, eliminating entries both below and above each pivot.
  3. Read the solution directly from the last column of the reduced matrix; unlike Gaussian elimination, no back substitution is required. A code sketch follows.
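
A possible Python sketch of Gauss-Jordan elimination with partial pivoting. The first equation is read here as x_1 + 2x_2 + 3x_3 + 4x_4 = 9, which is an assumption because the printed coefficients are garbled; adjust the matrix if the original paper differs.

import numpy as np

def gauss_jordan(A, b):
    """Gauss-Jordan elimination with partial pivoting: reduce [A | b] to [I | x]."""
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for k in range(n):
        # Partial pivoting: swap in the row with the largest |entry| in column k
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        M[k] = M[k] / M[k, k]           # normalise the pivot row
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]  # eliminate above and below the pivot
    return M[:, -1]

# First row of A is an assumed reading of the garbled equation
A = np.array([[1, 2, 3, 4],
              [7, 6, 5, 4],
              [8, 9, 0, 1],
              [2, 5, 4, 3]])
b = np.array([9, 33, 27, 23])
print(gauss_jordan(A, b))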

11. Heun’s Method for Differential Equations

We need to solve the differential equation:

y(x)+x2y=3x,with y(1)=1y'(x) + x^2y = 3x, \quad \text{with } y(1) = 1
  1. Write the equation in standard form:

    y=3xx2yy' = 3x - x^2y
  2. Apply Heun’s method:

    yn+1=yn+h2(f(xn,yn)+f(xn+h,yn+hf(xn,yn)))y_{n+1} = y_n + \frac{h}{2} \left( f(x_n, y_n) + f(x_n+h, y_n+h \cdot f(x_n, y_n)) \right)
  3. Iterate until you reach x=2x = 2, with h=0.25h = 0.25.
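
A short Python sketch of these Heun steps for y' = 3x - x^2 y from x = 1 to x = 2 with h = 0.25:

def heun_step(f, x, y, h):
    # One Heun (predictor-corrector) step for y' = f(x, y)
    predictor = y + h * f(x, y)
    return y + h / 2 * (f(x, y) + f(x + h, predictor))

f = lambda x, y: 3 * x - x**2 * y
x, y, h = 1.0, 1.0, 0.25
while x < 2.0 - 1e-9:
    y = heun_step(f, x, y, h)
    x += h
    print(f"x = {x:.2f}, y ≈ {y:.4f}")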


12. Poisson’s Equation

We are given the Poisson’s equation:

2f=3x2y\nabla^2 f = 3x^2y

on the square domain 0x30 \leq x \leq 3 and 0y30 \leq y \leq 3, with boundary conditions f=0f = 0 on the boundary.

This is a typical Finite Difference Method problem where the solution is approximated by discretizing the differential equation.

  • Use a grid with step size h=1h = 1 and solve for the interior points using the finite difference approximation.
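
A minimal Python sketch of the resulting finite-difference system: with h = 1 there are only four interior points, (1,1), (2,1), (1,2) and (2,2), and each contributes one 5-point-stencil equation (all boundary values are zero).

import numpy as np

g = lambda x, y: 3 * x**2 * y   # right-hand side of the Poisson equation
h = 1.0

# Unknowns ordered as [f(1,1), f(2,1), f(1,2), f(2,2)];
# each row encodes f_E + f_W + f_N + f_S - 4 f_P = h^2 g(x, y) with zero boundaries
A = np.array([[-4,  1,  1,  0],
              [ 1, -4,  0,  1],
              [ 1,  0, -4,  1],
              [ 0,  1,  1, -4]], dtype=float)
b = np.array([h**2 * g(1, 1), h**2 * g(2, 1), h**2 * g(1, 2), h**2 * g(2, 2)])

f = np.linalg.solve(A, b)
print(f)   # approximate f at the four interior grid points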


 Topic: Syllabus

Course Description


This course covers the solution of nonlinear equations, interpolation and approximation, numerical differentiation and integration, and the solution of linear algebraic equations, ordinary differential equations, and partial differential equations. It provides knowledge for numerical analysis.

Course Objectives 

The general objectives of this subject are to make students familiar with the theory of numerical analysis for solving algebraic and transcendental equations, the solution of ordinary and partial differential equations, and numerical differentiation and integration.

 Unit Contents

1. Solution of Nonlinear Equations : 10 hrs

Introduction, Types of Equations, Errors in Computing, The Bisection Method, The Method of False Position, Newton-Raphson Method, Solution of Systems of Nonlinear Equations, Fixed Point Iteration and Convergence

2. Interpolation and Approximation : 10 hrs

Introduction, Errors in Polynomial Interpolation, Lagrange's Polynomials, Newton's Interpolation using Difference and Divided Differences, Cubic Spline Interpolation, Least Squares Method for Linear and Non-linear Data.

3. Numerical Differentiation and Integration  : 5 hrs

Introduction to Numerical Differentiation, Newton's Differentiation Formulas, Numerical Integration (Trapezoidal Rule, Simpson's 1/3 rule, 3/8 rule); Romberg Integration; Numerical Double Integration.

4. Solution of Linear Algebraic Equations  : 10 hrs

Review of the existence of solutions and properties of matrices, Consistency of a Linear System of Equations, Gaussian Elimination Method, Gauss-Jordan Method, Inverse of a Matrix using Gauss Elimination Method, Method of Factorization, Iterative Methods (Jacobi & Gauss-Seidel Iteration), Power Method.

5. Solution of Ordinary Differential Equations  : 7 hrs

Introduction to Differential Equations, Initial Value Problem, Taylor Series Method, Picard's Method, Euler's Method and Its Accuracy, Heun's method, Runge-Kutta Methods, Solutions of Higher Order Equations, Boundary Value Problems, Shooting Method and Its Algorithm.

6. Solution of Partial Differential Equations : 5 hrs

Introduction to Partial Differential Equations, Deriving Differences Equations, Laplacian Equation and Poisson's Equation.

Laboratory Works 

Laboratory work will consist of program development and testing of Non-linear Equations, Interpolation, Numerical Differentiation and Integration, Linear Algebraic Equations, and Ordinary and Partial Differential Equations using C or C++ Builder.

 Text and Reference Books

Text Books

  1. C.F. Gerald and P.O. Wheatley, “Applied Numerical Analysis”, 4th Edition, Addison-Wesley Publishing Company, New York
  2. S.S. Sastry, “Introductory Methods of Numerical Analysis”, Prentice-Hall India

Reference Books


  1. W. Cheney and D. Kincaid, “Numerical Mathematics and Computing”, 2nd Edition, Brooks/Cole Publishing Co., 1985
  2. W.H. Press, B.P. Flannery et al., “Numerical Recipes in C”, 1st Edition, Cambridge University Press, 1998
