
MCA645 CU-MCA-SEM-II-Statistical & Numerical Methods

Published by kuljeet.singh, 2021-01-04 06:57:50


Therefore, the root correct to 6 decimal places is r = 1.258093.

Example 2: Find a root of the equation f(x) = x^3 - x - 1 using the Newton-Raphson method.

Solution: Here x^3 - x - 1 = 0. Let f(x) = x^3 - x - 1, so f'(x) = 3x^2 - 1.

x      0    1    2
f(x)  -1   -1    5

Since f(1) = -1 < 0 and f(2) = 5 > 0, the root lies between 1 and 2. Take x0 = (1 + 2)/2 = 1.5.

1st iteration:
f(x0) = f(1.5) = 0.875
f'(x0) = f'(1.5) = 5.75
x1 = x0 - f(x0)/f'(x0) = 1.5 - 0.875/5.75 = 1.34783

50 CU IDOL SELF LEARNING MATERIAL (SLM)

2nd iteration:
f(x1) = f(1.34783) = 0.10068
f'(x1) = f'(1.34783) = 4.44991
x2 = x1 - f(x1)/f'(x1) = 1.34783 - 0.10068/4.44991 = 1.3252

3rd iteration:
f(x2) = f(1.3252) = 0.00206
f'(x2) = f'(1.3252) = 4.26847
x3 = x2 - f(x2)/f'(x2) = 1.3252 - 0.00206/4.26847 = 1.32472

4th iteration:
f(x3) = f(1.32472) = 0
f'(x3) = f'(1.32472) = 4.26463
x4 = x3 - f(x3)/f'(x3) = 1.32472 - 0/4.26463

x4 = 1.32472

The approximate root of the equation x^3 - x - 1 = 0 using the Newton-Raphson method is 1.32472.

Possible drawbacks: Newton's method may not work in the following cases:
i) The x-values may run away, as in Figure 2.5(a). This might occur when the x-axis is an asymptote.
ii) We might choose an x-value at which the derivative evaluates to 0, as in Figure 2.5(b). The problem here is that we want the tangent line to intersect the x-axis so that we can approximate the root; if the curve has a horizontal tangent line at x, we cannot do this.
iii) We might choose an x that is the beginning of a cycle, as in Figure 2.5(c). Again, it is hoped that the picture clarifies this.

Fig 2.3: Divergence of Newton's method

However, the difficulties posed are somewhat artificial; normally we do not encounter such problems in practice. The Newton-Raphson method is one of the most powerful methods available for obtaining a simple root of f(x) = 0.
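The iteration above, x_{n+1} = x_n - f(x_n)/f'(x_n), can be sketched in Python (function names are illustrative); the zero-derivative check guards against drawback (ii):

```python
def newton_raphson(f, fprime, x0, tol=1e-6, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until successive
    iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        fp = fprime(x)
        if fp == 0:                     # horizontal tangent: method fails
            raise ZeroDivisionError("f'(x) = 0; choose another start")
        x_new = x - f(x) / fp
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Example 2: f(x) = x^3 - x - 1, starting from x0 = 1.5
root = newton_raphson(lambda x: x**3 - x - 1, lambda x: 3 * x**2 - 1, 1.5)
print(round(root, 5))   # 1.32472
```

Each iterate printed along the way would reproduce the hand computation: 1.34783, 1.3252, 1.32472.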

2.5 POLYNOMIAL EVALUATION

Two common tasks associated with polynomials are evaluation and interpolation: given the polynomial, find its values at certain arguments, and given the values at certain arguments, find the polynomial. We consider Horner's rule for evaluation and the Newton divided difference polynomial for interpolation.

Horner's Method

The standard method for evaluating a polynomial is Horner's method (also known as Horner's rule and nested multiplication), which consists of the recurrence

bn = an, bk-1 = ak-1 + bk·x0 (k = n, n-1, ..., 1), with f(x0) = b0.

Consider the polynomial in the expanded form f(x) = a0 + a1x + ... + anx^n. The algorithm proposed by this rule evaluates the monomials formed above starting from the one in the innermost parentheses and moving out to the monomials in the outer parentheses. The algorithm is executed in the following steps:

1. Set k = n
2. Let bk = ak
3. Let bk-1 = ak-1 + bk·x0
4. Let k = k - 1
5. If k > 0, go to step 3; else end

This algorithm can also be visualized by considering the 5th-degree polynomial given by:

f(x) = a0 + a1x + a2x^2 + a3x^3 + a4x^4 + a5x^5

which can be evaluated at x = x0 by rearranging it as:

f(x0) = a0 + x0(a1 + x0(a2 + x0(a3 + x0(a4 + a5x0))))

Another way to represent the results of this algorithm is through the tableau below (k = 5, 4, 3, 2, 1, 0):

b5 = a5
b4 = a4 + x0·b5
b3 = a3 + x0·b4
b2 = a2 + x0·b3
b1 = a1 + x0·b2
b0 = a0 + x0·b1

Example 1: Evaluate the polynomial f(x) = x^4 + 3x^3 + 5x^2 + 7x + 9 at x = 2.

Solution: Since the polynomial is of the 4th degree, n = 4:

b4 = 1
b3 = 3 + 2·1 = 5
b2 = 5 + 2·5 = 15
b1 = 7 + 2·15 = 37
b0 = 9 + 2·37 = 83

Therefore, f(2) = 83.

Example 2: Use Horner's method to evaluate f(x) = 2x^4 - 3x^2 + 3x - 4 at x0 = -2.

Solution:
Step 1: a4 = 2, b4 = 2
Step 2: a3 = 0, b3 = a3 + b4·x0 = 0 + 2(-2) = -4
Step 3: a2 = -3, b2 = a2 + b3·x0 = -3 + (-4)(-2) = 5
Step 4: a1 = 3, b1 = a1 + b2·x0 = 3 + 5(-2) = -7
Step 5: a0 = -4, b0 = a0 + b1·x0

b0 = -4 + (-7)(-2) = 10. Thus f(-2) = b0 = 10.

Remark: a4x^4 + a3x^3 + a2x^2 + a1x + a0 = (((a4x + a3)x + a2)x + a1)x + a0

 The number of ways of partitioning a set of n elements into k nonempty subsets is called the Stirling number of the second kind, denoted S(n, k). In other words, S(n, k) is the number of equivalence relations with k classes on a finite set with n elements. From [3], S(n, k) equals

S(n, k) = (1/k!) Σ (j = 0 to k) (-1)^j C(k, j) (k - j)^n

As a division algorithm, Horner's method is a nesting technique requiring only n multiplications and n additions to evaluate an arbitrary nth-degree polynomial, as summarized by Horner's theorem (see, for example, [1]).

Theorem: Let P(x) = anx^n + ... + a1x + a0, and define bn = an and bk = ak + bk+1·x0 for k = n - 1, ..., 0. Then b0 = P(x0); moreover, with Q(x) = bnx^(n-1) + bn-1x^(n-2) + ... + b1, we have P(x) = (x - x0)Q(x) + b0.

The theorem can be proved by a direct calculation. An additional advantage of Horner's method is the differentiation of P(x):

P'(x) = Q(x) + (x - x0)Q'(x).
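As a sketch in Python (names are illustrative), one synthetic-division pass returns P(x0) together with the coefficients of the quotient Q, and a second pass on Q then gives P'(x0) = Q(x0):

```python
def synthetic_division(coeffs, x0):
    """One Horner pass. coeffs = [a_n, ..., a1, a0], highest degree first.
    Returns (coefficients of the quotient Q, remainder b0 = P(x0))."""
    b = [coeffs[0]]                  # b_n = a_n
    for a in coeffs[1:]:
        b.append(a + b[-1] * x0)     # b_{k-1} = a_{k-1} + b_k * x0
    return b[:-1], b[-1]

# Example 2: P(x) = 2x^4 - 3x^2 + 3x - 4 at x0 = -2
q, p = synthetic_division([2, 0, -3, 3, -4], -2)
print(p)                             # 10, as in the worked example
_, dp = synthetic_division(q, -2)    # P'(x0) = Q(x0)
print(dp)                            # -49 (check: P'(x) = 8x^3 - 6x + 3)
```

The same function reproduces Example 1: `synthetic_division([1, 3, 5, 7, 9], 2)` gives remainder 83.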

Hence, P'(x0) = Q(x0), which is very convenient when applying Newton's method to find roots of a polynomial.

Example 3: Use Horner's method to evaluate P(x) = x^4 - 2x^2 + 3x - 4 at x0 = -1.

Solution: First, we construct the synthetic division:

b4 = 1
b3 = 0 + 1(-1) = -1
b2 = -2 + (-1)(-1) = -1
b1 = 3 + (-1)(-1) = 4
b0 = -4 + 4(-1) = -8

Hence, P(-1) = -8.

2.6 BAIRSTOW'S METHOD

Bairstow's method is an algorithm used to find the roots of a polynomial of arbitrary degree (usually degree 3 and higher). The method divides the polynomial fn(x) by a quadratic (x^2 - rx - s), where r and s are guessed. The division gives a new polynomial (the quotient fn-2(x)) and the remainder

R(x) = b1(x - r) + b0.

The quotient fn-2(x) and the remainder R(x) are obtained by standard polynomial division. The coefficients bi can be calculated by the following recurrence relationship:

bn = an, bn-1 = an-1 + r·bn, bi = ai + r·bi+1 + s·bi+2 (i = n - 2, ..., 0)

If (x^2 - rx - s) is an exact factor of fn(x), then the remainder R(x) is zero and the roots of (x^2 - rx - s) are roots of fn(x). Bairstow's method thus reduces to determining values of r and s such that R(x) = 0, i.e. b0, b1 → 0. Because b0 and b1 are functions of r and s, they can be expanded using a Taylor series about the current (r, s):

b1(r + Δr, s + Δs) ≈ b1 + (∂b1/∂r)Δr + (∂b1/∂s)Δs
b0(r + Δr, s + Δs) ≈ b0 + (∂b0/∂r)Δr + (∂b0/∂s)Δs

Setting both equations equal to zero gives

(∂b1/∂r)Δr + (∂b1/∂s)Δs = -b1   (a)
(∂b0/∂r)Δr + (∂b0/∂s)Δs = -b0   (b)

To solve the system of equations above, we need the partial derivatives of b0 and b1 with respect to r and s. Bairstow showed that these partial derivatives can be obtained by a synthetic division of the b's, in a fashion similar to the way in which the b's themselves were derived, that is, by replacing the ai's with bi's and the bi's with ci's:

cn = bn, cn-1 = bn-1 + r·cn, ci = bi + r·ci+1 + s·ci+2 (i = n - 2, ..., 1),

where ∂b0/∂r = c1, ∂b0/∂s = ∂b1/∂r = c2 and ∂b1/∂s = c3. Then substituting into equations (a) and (b) gives

c2·Δr + c3·Δs = -b1
c1·Δr + c2·Δs = -b0

These equations can be solved for Δr and Δs, which are used to improve the initial guesses of r and s. At each step, the approximate errors in r and s are estimated as

εa,r = |Δr/r| × 100%, εa,s = |Δs/s| × 100%.

If both fall below a prescribed stopping criterion εs, the values of the roots of the quadratic factor can be determined by

x = (r ± √(r^2 + 4s)) / 2.   (c)

At this point, there exist three possibilities:
(1) If the quotient polynomial fn-2 is a third (or higher) order polynomial, Bairstow's method can be applied to the quotient to evaluate new values for r and s; the previous values of r and s can serve as the starting guesses.
(2) If the quotient polynomial fn-2 is a quadratic function, then use equation (c) to obtain the remaining two roots of fn(x).
(3) If the quotient polynomial fn-2 is a linear function, then the single remaining root is given by x = -s/r.

Example 1: Find all the roots of the polynomial

by Bairstow's method, with the given initial values of r and s.

Solution: Set iteration = 1. Using the recurrence relations, we obtain the simultaneous equations for Δr and Δs; on solving, we get the corrected values of r and s.

Set iteration = 2.

Now we have to solve the corresponding equations; on solving, we get the next corrections. Proceeding in this manner, in about ten iterations we obtain converged values of r and s. Using these, we get the quadratic factor and its roots.

At this point the quotient is a quadratic equation, whose roots are obtained from equation (c).
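The full procedure (synthetic division for the b's, a second division for the c's, the correction equations for Δr and Δs, and deflation) can be sketched as follows; the cubic and starting guesses in the demo are illustrative, not those of Example 1:

```python
import math

def quad_roots(r, s):
    """Roots of the quadratic factor x^2 - r*x - s = 0."""
    disc = r * r + 4 * s
    if disc >= 0:
        return [(r + math.sqrt(disc)) / 2, (r - math.sqrt(disc)) / 2]
    return [complex(r, math.sqrt(-disc)) / 2, complex(r, -math.sqrt(-disc)) / 2]

def bairstow(a, r=-1.0, s=-1.0, tol=1e-12, max_iter=200):
    """Roots of a[0]*x^n + a[1]*x^(n-1) + ... + a[n] by Bairstow's method."""
    a, roots = list(map(float, a)), []
    while len(a) > 3:                      # degree >= 3: extract x^2 - r x - s
        for _ in range(max_iter):
            b = [a[0], a[1] + r * a[0]]    # b's: synthetic division of the a's
            for ai in a[2:]:
                b.append(ai + r * b[-1] + s * b[-2])
            c = [b[0], b[1] + r * b[0]]    # c's: same recurrence on the b's
            for bi in b[2:-1]:
                c.append(bi + r * c[-1] + s * c[-2])
            c1, c2, c3 = c[-1], c[-2], c[-3]
            det = c2 * c2 - c1 * c3
            if det == 0:                   # degenerate step: perturb and retry
                r, s = r + 1.0, s + 1.0
                continue
            dr = (-b[-2] * c2 + b[-1] * c3) / det  # solve c2*dr + c3*ds = -b1
            ds = (-b[-1] * c2 + b[-2] * c1) / det  #       c1*dr + c2*ds = -b0
            r, s = r + dr, s + ds
            if abs(dr) < tol and abs(ds) < tol:
                break
        roots += quad_roots(r, s)          # roots of the converged factor
        a = b[:-2]                         # deflate to the quotient f_{n-2}
    if len(a) == 3:                        # quadratic quotient: equation (c)
        roots += quad_roots(-a[1] / a[0], -a[2] / a[0])
    elif len(a) == 2:                      # linear quotient
        roots.append(-a[1] / a[0])
    return roots

# illustrative cubic (x-1)(x-2)(x-3) with starting guesses near a true factor
print(bairstow([1, -6, 11, -6], r=3.0, s=-1.5))   # [2.0, 1.0, 3.0]
```

As the text notes, convergence depends on the initial guesses for r and s; poor guesses may need many iterations or fail to converge.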


2.7 SUMMARY

The bisection method is the simplest method for solving a transcendental equation. The root-finding problem is one of the most important computational problems. It arises in a wide variety of practical applications in physics, chemistry, biosciences, engineering, etc. As a matter of fact, the determination of any unknown appearing implicitly in scientific or engineering formulas gives rise to a root-finding problem. The numerical methods above help us solve such root-finding problems.

Normally, when evaluating a polynomial at a certain value, we are used to substituting the value into the polynomial and doing the math. We may also use or develop a computational application to do this, which becomes a necessity when dealing with complex polynomials of high degree. How the computer processes the problem depends mainly on how you, as a programmer, describe it to the computer. You may design your application to evaluate the polynomial by direct substitution or

using synthetic division according to Horner's rule. The only difference between the two approaches is the speed at which the computer solves the problem. The advantage offered by Horner's rule is that it reduces the number of multiplication operations. Knowing that the computer processing time for a single multiplication is 5 to 20 times the processing time of an addition, you can tell that designing your application to evaluate the polynomial using Horner's rule will significantly improve the execution time taken by the computer.

2.8 KEYWORDS

 Binomial: a polynomial containing two terms, such as 2x - 9.
 Iterative method: a mathematical way of solving a problem which generates a sequence of approximations.
 Iteration: the technique of solving any linear system problem by successive approximation at each step.
 Bisection method: used to find the roots of a polynomial equation; it separates the interval and subdivides the interval in which a root of the equation lies.
 Polynomials: expressions in mathematics that include variables, coefficients and exponents.

2.9 LEARNING ACTIVITY

1. Find the iterative method based on the Newton-Raphson method for finding N^(1/3), where N is a positive real number. Apply the method to N = 18 to obtain the result correct to two significant digits.
------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------

2. Use the given initial approximation to find a quadratic factor of the given polynomial equation using Bairstow's method, and hence find all its roots.

------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------

2.10 UNIT END QUESTIONS

A. Descriptive Type Questions
1. State the drawbacks of the Newton-Raphson method.
2. Compare the bisection and false position methods.
3. Discuss the concept of polynomial evaluation.
4. State and explain Bairstow's algorithm.
5. State the advantages and disadvantages of Bairstow's method.

B. Multiple Choice Questions
1. Interpolation provides a means for estimating functions
a) At the beginning points
b) At the ending points
c) At the intermediate points
d) None of the mentioned

2. Interpolation methods are
a) Linear interpolation
b) Piecewise constant interpolation
c) Polynomial interpolation
d) All of the mentioned

3. Which is more expensive?
a) Polynomial interpolation
b) Linear interpolation
c) Polynomial & linear interpolation
d) None of the mentioned

4. Interpolation is a method of
a) Interrelating
b) Estimating
c) Integrating
d) Combining

5. Interpolation means
a) Adding new data points

b) Only aligning old data points
c) Only removing old data points
d) None of the mentioned

Answers: 1 – c; 2 – d; 3 – a; 4 – b; 5 – a

2.11 REFERENCES

 Rajaraman V., Computer Oriented Numerical Methods. New Delhi: Prentice Hall.
 Salaria R.S., A Textbook of Statistical and Numerical Methods in Engineering. Delhi: Khanna Book Publishing Company.
 Gupta S.P. and Kapoor V.K. (2014). Fundamentals of Mathematical Statistics. Delhi: Sultan Chand and Sons.
 Sujatha Sinha, Sushma Pradhan, Numerical Analysis and Statistical Methods. Academic Publishers.
 Anderson (1990). Statistical Modelling. New York: McGraw Publishing House.
 Gupta S.P. and Kapoor V.K. (2015). Fundamentals of Applied Statistics. Delhi: Sultan Chand & Sons.
 Kendall E. Atkinson, An Introduction to Numerical Analysis. Wiley India Private Limited.
 Gupta, Dey, Numerical Methods. McGraw Hill Education.
 Numerical Methods & Analysis – Engineering App – Google Play Store
 https://en.wikibooks.org/wiki/Statistics/Numerical_Methods/Numerical_Comparison_of_Statistical_Software

UNIT 3 - LINEAR EQUATIONS AND DIFFERENTIAL EQUATIONS 1

Structure
3.0 Learning Objectives
3.1 Introduction
3.2 Gauss Elimination Method
3.3 Ill-conditioned Equations
3.3.1 Tests for Ill-conditioning*
3.4 Summary
3.5 Keywords
3.6 Learning Activity
3.7 Unit End Questions
3.8 References

3.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
 Comprehend the Gauss elimination method
 Explain ill-conditioned equations

3.1 INTRODUCTION

Systems of linear equations arose in Europe with the introduction in 1637 by René Descartes of coordinates in geometry. In fact, in this new geometry, now called Cartesian geometry, lines and planes are represented by linear equations, and computing their intersections amounts to solving systems of linear equations. Many people have contributed to Gaussian elimination, including Carl Friedrich Gauss. His method for calculating a special case was adopted by professional hand computers in the nineteenth century. Confusion about the history eventually made Gauss not only the namesake but also the originator of the subject; we may write "Gaussian elimination" to honour him without intending an attribution. After the name "Gaussian" had become established in the 1950s, the term "Gauss-Jordan" was adopted when the geodesist W. Jordan improved the technique so that he could use such calculations to process his observed land-surveying data.

3.2 GAUSS ELIMINATION METHOD

Definition (Forward/Gauss Elimination Method): Gaussian elimination is a method of solving a linear system (consisting of m equations in n unknowns) by bringing the augmented matrix to an upper triangular form. This elimination process is also called the forward elimination method. The following examples illustrate the Gauss elimination procedure.

Concept: Gaussian elimination is the name of the method we use to perform the three types of matrix row operations on an augmented matrix coming from a linear system of equations, in order to find the solutions of such a system. This technique is also called row reduction, and it consists of two stages: forward elimination and back substitution. These two stages are differentiated not by the operations you can use through them, but by the result they produce.

The forward elimination step refers to the row reduction needed to simplify the matrix in question into its echelon form. This stage demonstrates whether the system of equations portrayed by the matrix has a unique solution, infinitely many solutions, or no solution at all. If the system is found to have no solution, then there is no reason to continue row reducing the matrix through the next stage.

If it is possible to obtain solutions for the variables involved in the linear system, then the Gaussian elimination with back substitution stage is carried through. This last step produces a reduced echelon form of the matrix, which in turn provides the general solution to the system of linear equations.
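The two stages just described can be sketched in Python; the 3 × 3 system in the demo is illustrative, and partial pivoting is added for numerical robustness:

```python
def gauss_eliminate(A, b):
    """Solve Ax = b by forward elimination (with partial pivoting)
    followed by back substitution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        # partial pivoting: swap in the row with the largest pivot
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                  # eliminate below the pivot
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# illustrative system with solution (2, 3, -1)
sol = gauss_eliminate([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3])
print(sol)   # approximately [2.0, 3.0, -1.0]
```

Note that this sketch assumes the system has a unique solution; a zero pivot after the swap would signal a singular or degenerate system.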

The Gaussian elimination rules are the same as the rules for the three elementary row operations; in other words, you can algebraically operate on the rows of a matrix in the following three ways (or a combination of them):

1. Interchanging two rows
2. Multiplying a row by a constant (any constant which is not zero)
3. Adding a row to another row

And so, solving a linear system with matrices using Gaussian elimination happens to be a structured, organized and quite efficient method.

Example 1: Solve the following system of linear equations using the Gauss elimination method.

Equation 1: System of linear equations to solve

Solution: We know from our lesson on representing a linear system as a matrix that we can represent such a system as an augmented matrix like the one below.

Equation 2: Transcribing the linear system into an augmented matrix

Let us row-reduce (use Gaussian elimination) to simplify the matrix:

Equation 3: Row reducing (applying the Gaussian elimination method to) the augmented matrix

Resulting in the matrix:

Equation 4: Reduced matrix in echelon form

Notice that at this point we can observe that this system of linear equations is solvable with a unique solution for each of its variables. What we have performed so far is the first stage of row reduction: forward elimination. We could continue simplifying this matrix even further (which would take us to the second stage of back substitution), but we do not really need to, since at this point the system is easily solvable. Thus, we look at the resulting system and solve it directly:

Equation 5: Resulting linear system of equations to solve

From this set, we can immediately observe that the value of the variable z is z = -2. We use this to substitute into the second equation to solve for y, and then substitute both the y and z values into the first equation to solve for x:

Equation 6: Solving the resulting linear system of equations

And the final solution of the system is:

Equation 7: Final solution to the system of linear equations for Example 1

Example 2: Solve the linear system by the Gauss elimination method.

Solution: In this case, the augmented matrix is

The method proceeds along the following steps.

1. Interchange the 1st and 2nd equations (or R12).

2. Divide the 1st equation by 2 (or R1(1/2)).

3. Add -1 times the 1st equation to the 3rd equation (or R31(-1)).

4. Add -1 times the 2nd equation to the 3rd equation (or R32(-1)).

5. Multiply the 3rd equation by -2/3 (or R3(-2/3)).

The last equation gives z = 1, the second equation now gives y = 1, and finally the first equation gives x = 1. Hence the set of solutions is (x, y, z)t = (1, 1, 1)t, a UNIQUE SOLUTION.

Example 3: Solve the linear system by the Gauss elimination method.

Solution: In this case, the augmented matrix is

and the method proceeds as follows:

1. Add -1 times the first equation to the second equation.
2. Add -3 times the first equation to the third equation.
3. Add -1 times the second equation to the third equation.

Thus, the system has an INFINITE NUMBER OF SOLUTIONS, with one variable remaining arbitrary.

Difference between Gaussian elimination and Gauss-Jordan elimination

The difference between Gaussian elimination and Gauss-Jordan elimination is that one produces a matrix in row echelon form while the other produces a matrix in row reduced echelon form. A row echelon form matrix has an upper triangular composition, where any zero rows are at the bottom and leading terms are all to the right of the leading term of the row above. Reduced

echelon form goes beyond this by simplifying much more (sometimes even reaching the shape of an identity matrix).

Advantages of Gaussian elimination: This method is dependable, and it can solve more than 2 linear equations simultaneously.

Disadvantages of Gaussian elimination: This method can be a slow procedure and therefore takes time. There are also drawbacks when elimination methods are carried out on computers:
o Theoretically, the elimination methods should give the exact solution.
o However, computers store variables only to a certain precision (say single precision, double precision, etc.). Due to this there are drawbacks like:
o Accumulation of round-off errors.
o Failure in solving ill-conditioned systems.

3.3 ILL-CONDITIONED EQUATIONS

A mathematical problem or series of equations is ill-conditioned if a small change in the independent variable (input) leads to a large change in the dependent variable (output). Certain systems of linear equations are such that their solutions are very sensitive to small changes (and therefore to errors) in their coefficients and constants. We give an example below in which 1% changes in two coefficients change the solution by a factor of 10 or more. Such systems are said to be ill-conditioned. If a system is ill-conditioned, a solution obtained by a numerical method may differ greatly from the exact solution, even though great care is taken to keep round-off and other errors very small. As an example, consider the system of equations:

which has the exact solution x = 1, y = 2. Changing coefficients of the second equation by 1% and the constant of the first equation by 5% yields the system:

It is easily verified that the exact solution of this system is x = 11, y = -18.2. This solution differs greatly from the solution of the first system. Both of these systems are said to be ill-conditioned.

If a system is ill-conditioned, then the usual procedure of checking a numerical solution by calculation of the residuals may not be valid. In order to see why this is so, suppose we have an approximation X to the true solution x. The vector of residuals r is then given by r = b - AX = A(x - X). Thus e = x - X satisfies the linear system Ae = r. In general, r will be a vector with small components. However, in an ill-conditioned system, even if the components of r are small, so that r is 'close' to 0, the solution of the linear system Ae = r could differ greatly from the solution of the system Ae = 0, namely 0. It then follows that X may be a poor approximation to x despite the residuals in r being small. Obtaining accurate solutions to ill-conditioned linear systems can be difficult, and many tests have been proposed for determining whether or not a system is ill-conditioned.

3.3.1 Tests for ill-conditioning*

We recall from the section above that the solutions of ill-conditioned systems of linear equations are very sensitive to small changes in their coefficients and constants.

Norms

One of the most common tests for ill-conditioning of a linear system involves the condition number of the coefficient matrix. In order to define this quantity, we need to consider first the concept of the norm of a vector or matrix, which in some way measures the size of their elements. Let x and y be vectors and α a real number. Then a vector norm ‖x‖ is a real number with the properties:

1. ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x is a vector with all components zero;
2. ‖αx‖ = |α| ‖x‖ for any real number α;
3. ‖x + y‖ ≤ ‖x‖ + ‖y‖ (triangle inequality).

There are many possible ways to choose a vector norm with such properties.
One vector norm that is probably familiar to the student is the Euclidean or 2-norm. Thus, if x is an n × 1 vector, then the 2-norm is denoted and defined by

‖x‖2 = (x1^2 + x2^2 + ... + xn^2)^(1/2)

As an example, if x is the 5 × 1 vector with the components x1 = 1, x2 = -3, x3 = 4, x4 = -6, x5 = 2, then ‖x‖2 = √(1 + 9 + 16 + 36 + 4) = √66 ≈ 8.124.

Another possible choice of norm, which is more suitable for our purposes here, is the infinity norm, defined by

‖x‖∞ = max(1 ≤ i ≤ n) |xi|.

Thus, for the vector in the previous example, ‖x‖∞ = 6. It is easily verified that ‖·‖∞ has the three properties above. For ‖·‖2, the first two properties are readily verified; the triangle inequality is somewhat more difficult and requires the use of the so-called Cauchy-Schwarz inequality.

The defining properties of a matrix norm are similar, except that there is an additional property. Let A and M be matrices and α a real number. Then a matrix norm ‖A‖ is a real number with the properties:

1. ‖A‖ ≥ 0, and ‖A‖ = 0 if and only if A is a matrix with all elements zero;
2. ‖αA‖ = |α| ‖A‖;
3. ‖A + M‖ ≤ ‖A‖ + ‖M‖;
4. ‖AM‖ ≤ ‖A‖ ‖M‖.

As for vector norms, there are many ways of choosing matrix norms with the four properties above, but we will consider only the infinity norm. If A is an n × n matrix, then the infinity norm is defined by

‖A‖∞ = max(1 ≤ i ≤ n) Σ(j = 1 to n) |aij|.

By this definition, this norm is the maximum of the sums obtained by adding the absolute values of the elements in each row, whence it is commonly referred to as the maximum row sum norm. A useful property relating the matrix and vector infinity norms is

‖Ax‖∞ ≤ ‖A‖∞ ‖x‖∞,

which follows from Property 4 of matrix norms above.

Ill-conditioning tests

We will now find out whether or not a system is ill-conditioned by using the condition number of the coefficient matrix. If A is an n × n matrix and A⁻¹ is its inverse, then the condition number of A is defined by

cond(A) = ‖A‖∞ ‖A⁻¹‖∞.
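As a numerical sketch, the maximum row sum norm and the condition number cond(A) = ‖A‖∞ ‖A⁻¹‖∞ can be computed for a small illustrative matrix:

```python
def norm_inf(A):
    """Maximum row sum norm ||A||_inf."""
    return max(sum(abs(a) for a in row) for row in A)

def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def cond_inf(A):
    """Condition number cond(A) = ||A||_inf * ||A^-1||_inf."""
    return norm_inf(A) * norm_inf(inverse_2x2(A))

# a nearly singular, hence ill-conditioned, illustrative matrix
A = [[1.0, 1.0], [1.0, 1.01]]
print(cond_inf(A))   # about 404: expect to lose roughly 2-3 digits of accuracy
```

For larger matrices, explicitly forming A⁻¹ in this way is expensive; as noted below, software packages instead estimate ‖A⁻¹‖∞ without computing the inverse.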

The condition number is bounded below by 1, since

1 = ‖I‖∞ = ‖AA⁻¹‖∞ ≤ ‖A‖∞ ‖A⁻¹‖∞ = cond(A),

where we have used matrix norm property 4 given in the preceding section. Large values of the condition number usually indicate ill-conditioning. As a justification for this last statement, we state and prove the following theorem.

Theorem: Suppose x satisfies the linear system Ax = b and x̂ satisfies the linear system Ax̂ = b̂. Then

‖x - x̂‖∞ / ‖x‖∞ ≤ cond(A) ‖b - b̂‖∞ / ‖b‖∞.

Proof: We have A(x - x̂) = b - b̂, so that x - x̂ = A⁻¹(b - b̂). It then follows that

‖x - x̂‖∞ ≤ ‖A⁻¹‖∞ ‖b - b̂‖∞.

However, since b = Ax, we have ‖b‖∞ ≤ ‖A‖∞ ‖x‖∞, or 1/‖x‖∞ ≤ ‖A‖∞ / ‖b‖∞, from which the result follows.

It is seen from the theorem that even if the difference between b and b̂ is small, the change in the solution, as measured by the 'relative error' ‖x - x̂‖∞ / ‖x‖∞, may be large when the condition number is large. It follows that a large condition number is an indication of possible ill-conditioning of a system. A similar theorem for the case when there are small changes to the coefficients of A may be found in more advanced texts such as Atkinson (1993). Such a theorem also shows that a large condition number is an indicator of ill-conditioning.

The question then arises as to how large the condition number has to be for ill-conditioning to be a problem. Roughly speaking, if the condition number is 10^m and the machine being used to solve a linear system works to k decimal digits (kD) of accuracy, then the solution of the linear system will be accurate to about k - m decimal digits.

In Step 12 we had the coefficient matrix

which was associated with an ill-conditioned system. Its condition number is large, which suggests that a numerical solution would not be very accurate if only 2D accuracy were used in the calculations. Indeed, if the components of A were rounded to two decimal digits, the two rows of A would be identical. Then the determinant of A would be zero, and it follows from the theorem in Step 11 that this system would not have a unique solution.

We recall that the definition of the condition number requires A⁻¹, the computation of which is expensive. Moreover, even if the inverse were calculated, this approximation might not be very accurate if the system is ill-conditioned. It is therefore common in software packages to estimate the condition number by obtaining an estimate of ‖A⁻¹‖∞ without explicitly finding A⁻¹.

3.4 SUMMARY

An equation of the form ax + by + c = 0, where a, b and c are real numbers such that a and b are not both zero, is called a linear equation in two variables. Here a and b are called the coefficients of x and y respectively, and c is called the constant term. The equation is called linear because it is of the first degree. A solution is an ordered pair of real numbers which satisfies the equation; a linear equation in two variables has infinitely many solutions.

The Gaussian method is used in channel decoding algorithms, the Gaussian filter is used to enhance images, and the Gaussian method is also used in scheduling algorithms.
3.5 KEYWORDS

 Gaussian elimination: a method of solving a linear system by bringing the augmented matrix to an upper triangular form.
 Ill-conditioned equation: a mathematical problem or series of equations is ill-conditioned if a small change in the independent variable (input) leads to a large change in the dependent variable (output).
 Norm: in mathematics, a norm is a function from a vector space over the real or complex numbers to the nonnegative real numbers that satisfies certain properties pertaining to scalability and additivity, and takes the value zero only if the input vector is zero.
 Math computation: skills comprising what many people refer to as basic arithmetic: addition, subtraction, multiplication and division.
 Condition number: the condition number of a function measures how much the output value of the function can change for a small change in the input argument.

3.6 LEARNING ACTIVITY

1. Solve the linear system by the Gauss elimination method.
---------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------

2. Is it true to say that an ill-conditioned system does not have an exact solution?
---------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------

3.7 UNIT END QUESTIONS

A. Descriptive Questions
1. What is the Gauss elimination method?
2. Explain the steps of the Gauss elimination method with an example.
3. What are the three properties of a vector norm?
4. What are the four properties of a matrix norm?
5. How is the condition number of a matrix defined, and how is it used as a test for ill-conditioning?

B. Multiple Choice Questions
1. Solve the following equations by the Gauss elimination method.
x + 4y - z = -5
x + y - 6z = -12
3x - y - z = 4
a) x = 1.64791, y = 1.14085, z = 2.08451
b) x = 1.65791, y = 1.14185, z = 2.08441

c) x = 1.64691, y = 1.14095, z = 2.08461
d) x = 1.64491, y = 1.15085, z = 2.09451

2. Solve the given system of equations by the Gauss elimination method.
3x + 4y - z = -6
-2y + 10z = -8
4y - 2z = -2
a) (-2, -1, -1)
b) (-1, -2, -1)
c) (-1, -1, -2)
d) (-1, -1, -1)

3. Find the values of x, y, z in the following system of equations by the Gauss elimination method.
2x + y - 3z = -10
-2y + z = -2
z = 6
a) 2, 4, 6
b) 2, 7, 6
c) 3, 4, 6
d) 2, 4, 5

4. The following system of equations has:
x - y - z = 4
2x - 2y - 2z = 8
5x - 5y - 5z = 20

a) Unique solution
b) No solution
c) Infinitely many solutions
d) Finite solutions

5. Solve this system of equations and comment on the nature of the solution using the Gauss elimination method.
x + y + z = 0
-x - y + 3z = 3
-x - y - z = 2
a) Unique solution
b) No solution
c) Infinitely many solutions
d) Finite solutions

Answers: 1 – a; 2 – d; 3 – a; 4 – c; 5 – b

3.8 REFERENCES

 Rajaraman V., Computer Oriented Numerical Methods. New Delhi: Prentice Hall.
 Salaria R.S., A Textbook of Statistical and Numerical Methods in Engineering. Delhi: Khanna Book Publishing Company.
 Sujatha Sinha, Sushma Pradhan, Numerical Analysis and Statistical Methods. Academic Publishers.
 Anderson (1990). Statistical Modelling. New York: McGraw Publishing House.
 J. H. Wilkinson, The Algebraic Eigenvalue Problem (Numerical Mathematics and Scientific Computation). Clarendon Press.
 Kendall E. Atkinson, An Introduction to Numerical Analysis. Wiley India Private Limited.
 Gupta, Dey, Numerical Methods. McGraw Hill Education.
 Numerical Methods & Analysis – Engineering App – Google Play Store
 https://en.wikibooks.org/wiki/Statistics/Numerical_Methods/Numerical_Comparison_of_Statistical_Software

UNIT 4 - LINEAR EQUATIONS AND DIFFERENTIAL EQUATIONS 2

Structure
4.0 Learning Objectives
4.1 Introduction
4.2 Taylor Series and Euler Methods
4.3 Runge-Kutta Methods
4.4 Predictor-Corrector Methods
4.5 Summary
4.6 Keywords
4.7 Learning Activity
4.8 Unit End Questions
4.9 References

4.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
- Comprehend the Taylor series and Euler methods
- Enumerate the Runge-Kutta methods
- Understand the predictor-corrector methods

4.1 INTRODUCTION

Differential equations is the branch of mathematics that starts with one or many recorded observations of change, and ends with one or many functions that predict future outcomes. An algebraic equation, such as a quadratic equation, is solved with a value or a set of values; a differential equation, by contrast, is solved with a function or a class of functions. Often written "DFQ" for short, the subject is a core requirement in virtually all STEM undergraduate programs for a simple reason: differential equations are a fantastic tool for modelling situations in any field or industry.

Numerical Solution of an ODE: The idea behind numerical solutions of a differential equation is to replace differentiation by differencing. A computer cannot differentiate, but it can easily do a difference. (Differentiation is a continuous process; differencing is a discrete process.)
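The idea of replacing differentiation by differencing can be demonstrated in a few lines. A minimal sketch, using the hypothetical test function y(t) = e^t (whose derivative is itself), not any example from this unit:

```python
import math

# Replace differentiation by differencing:
#   y'(t) ~ (y(t + dt) - y(t)) / dt
def forward_difference(y, t, dt):
    return (y(t + dt) - y(t)) / dt

# Test on y(t) = exp(t); the exact derivative at t = 1 is exp(1).
exact = math.exp(1.0)
for dt in (0.1, 0.01, 0.001):
    approx = forward_difference(math.exp, 1.0, dt)
    print(dt, approx, abs(approx - exact))
# The error shrinks roughly in proportion to dt.
```

Shrinking the step dt makes the discrete difference approach the continuous derivative, which is exactly the principle the numerical methods below are built on.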

4.2 TAYLOR SERIES AND EULER METHODS

Taylor Series Expansion: If a function y(t) behaves nicely enough, then its Taylor series expansion converges:

y(t + ∆t) = y(t) + ∆t y'(t) + ∆t2 y''(t)/2! + ∆t3 y'''(t)/3! + . . .

The Taylor series with remainder term is

y(t + ∆t) = y(t) + ∆t y'(t) + . . . + ∆t^n y(n)(t)/n! + ∆t^(n+1) y(n+1)(τ)/(n+1)!

where τ is some value between t and t + ∆t. You can truncate this for any value of n.

Example 1: Solve the initial value problem y' = -2xy2, y(0) = 1 for y at x = 1, with step length h = 0.2, using the Taylor series method of order four.

Solution: Given y' = f(x, y) = -2xy2
⇒ y'' = -2y2 - 4xyy'
y''' = -8yy' - 4xy'2 - 4xyy''
yiv = -12y'2 - 12yy'' - 12xy'y'' - 4xyy'''
yv = -48y'y'' - 16yy''' - 12xy''2 - 16xy'y''' - 4xyyiv

The fourth order Taylor's formula is
y(xi + h) = y(xi) + h y'(xi, yi) + h2 y''(xi, yi)/2! + h3 y'''(xi, yi)/3! + h4 yiv(xi, yi)/4! + . . .

Given at x = 0, y = 1, and with h = 0.2, we have
y' = -2(0)(1)2 = 0.0
y'' = -2(1)2 - 4(0)(1)(0) = -2
y''' = -8(1)(0) - 4(0)(0)2 - 4(0)(1)(-2) = 0.0
yiv = -12(0)2 - 12(1)(-2) - 12(0)(0)(-2) - 4(0)(1)(0) = 24
y(0.2) = 1 + (0.2)(0) + (0.2)2(-2)/2! + 0 + (0.2)4(24)/4! = 0.9615

Now at x = 0.2 we have y = 0.9615,
y' = -0.3699, y'' = -1.5648, y''' = 3.9397 and yiv = 11.9953

Then
y(0.4) = 0.9615 + (0.2)(-0.3699) + (0.2)2(-1.5648)/2! + (0.2)3(3.9397)/3! + (0.2)4(11.9953)/4! = 0.8624
y(0.6) = 0.8624 + (0.2)(-0.5950) + (0.2)2(-0.6665)/2! + (0.2)3(4.4579)/3! + (0.2)4(-5.4051)/4! = 0.7356
y(0.8) = 0.7356 + (0.2)(-0.6494) + (0.2)2(0.0642)/2! + (0.2)3(2.6963)/3! + (0.2)4(-10.0879)/4! = 0.6100
y(1.0) = 0.6100 + (0.2)(-0.5953) + (0.2)2(0.4178)/2! + (0.2)3(0.9553)/3! + (0.2)4(-6.7878)/4! = 0.5001

Now at x = 1.0 we have y = 0.5001,
y' = -0.5001, y'' = 0.5004, y''' = -0.000525 and yiv = -3.0005

Error in the approximation:
E4 = h4 (yiv(1.0) - yiv(0.8))/5! = h4 (-3.0005 - 11.9953)/5! = -1.9994e-004

The analytical solution is y(x) = 1/(1 + x2); at x = 1, y = 0.5.

Example 2: Using the Taylor series method, find y(0.1) for y' = x - y2, y(0) = 1 correct up to four decimal places.

Solution: Given y' = f(x, y) = x - y2
⇒ y'' = 1 - 2yy', y''' = -2yy'' - 2y'2, yiv = -2yy''' - 6y'y'', yv = -2yyiv - 8y'y''' - 6y''2

Since at x = 0, y = 1: y' = -1, y'' = 3, y''' = -8, yiv = 34 and yv = -186

The fourth order Taylor's formula is
y(x) = y(x0) + (x - x0) y'(x0, y0) + (x - x0)2 y''(x0, y0)/2! + (x - x0)3 y'''(x0, y0)/3! + (x - x0)4 yiv(x0, y0)/4! + (x - x0)5 yv(x0, y0)/5! + . . .

= 1 - x + 3x2/2! - 8x3/3! + 34x4/4! - 186x5/5! (since x0 = 0)
= 1 - x + 3x2/2 - 4x3/3 + 17x4/12 - 31x5/20

Now
y(0.1) = 1 - (0.1) + 3(0.1)2/2 - 4(0.1)3/3 + 17(0.1)4/12 - 31(0.1)5/20
= 0.9 + 3(0.1)2/2 - 4(0.1)3/3 + 17(0.1)4/12 - 31(0.1)5/20
= 0.915 - 4(0.1)3/3 + 17(0.1)4/12 - 31(0.1)5/20
= 0.9137 + 17(0.1)4/12 - 31(0.1)5/20
= 0.9138 - 31(0.1)5/20
= 0.9138

Since the value of the last term does not affect the first four decimal places, the Taylor series formula of order four is sufficient to find y(0.1) accurate up to four decimal places. The range of values of x for which the above series, truncated after the term containing x4, computes y accurate up to four decimal places can be found from
31x5/20 ≤ 0.00005 ⇒ x ≤ 0.126

Euler's Method: If we truncate the Taylor series after the first-order term, we can rearrange it and solve for y'(t):

y'(t) ≈ (y(t + ∆t) - y(t))/∆t

Consider the differential equation

y'(t) = f(t, y(t)), y(0) = y0      (1)

Now we can attempt to solve (1) by replacing the derivative with a difference:

y(t + ∆t) ≈ y(t) + ∆t f(t, y(t))

Start with y(0) and step forward to solve for any time. What's good about this? If the O term is something nice looking, this quantity decays with ∆t, so if we take ∆t smaller and smaller, this gets closer and closer to the real value. What can go wrong? The O term may be ugly. The errors can accumulate as I step forward in time. Also, even though this may be a good approximation for y'(t), it may not converge to the right solution. To answer these questions, we look at this scheme in depth.

Terminology: From now on, we'll call yn the numerical approximation to the solution y(n∆t), and tn = n∆t. Euler's method can then be written

yn+1 = yn + ∆t f(tn, yn)

This method assumes that you can move from one location to the next using the slope given by equation (1). When we do this, our errors will decay linearly with ∆t. We will show this in two steps, so that we can generalize it.

Local Truncation Error: To be able to evaluate what we expect the order of a method to look like, we look at the local truncation error (LTE)

LTE = (y(tn+1) - y(tn))/∆t - f(tn, y(tn))

i.e., it is the residue when the exact solution of the ODE (1) is plugged into the numerical scheme. If yn is close to y(tn) then the LTE will be close to zero. The local truncation error represents the terms neglected by truncating the Taylor series. This is not the error that we get from the method (i.e., the difference between the real solution and the numerical solution), but the two will be connected. If I don't know y(t), what is the use of this definition? It turns out that even without explicit knowledge of the solution we can still calculate the LTE and use it as an estimate and control of the error, by placing certain smoothness assumptions on y(t) and using the Taylor expansions. Clearly, at time tn, Euler's method has local truncation error

LTE = (∆t/2) y''(τ) = O(∆t)

in other words, we can write this

y(tn+1) = y(tn) + ∆t f(tn, y(tn)) + ∆t·LTE

Of course, the method is

yn+1 = yn + ∆t f(tn, yn)

Subtract these two:

y(tn+1) - yn+1 = y(tn) - yn + ∆t (f(tn, y(tn)) - f(tn, yn)) + ∆t·LTE

Because f is Lipschitz continuous,

|f(tn, y(tn)) - f(tn, yn)| ≤ L |y(tn) - yn|

And so, if we let the global error be en = |y(tn) - yn|, then we can bound the growth of this error:

en+1 ≤ (1 + ∆t L) en + ∆t |LTE|
⇒ en ≤ ((1 + ∆t L)^n - 1) |LTE| / L ≤ (e^(L tn) - 1) |LTE| / L

Compare this with the local error, LTE ≤ (1/2) M ∆t (where M = max |y''|): we see that the global error has the same order as the local error, with a different coefficient in the estimates. They are related by the Lipschitz constant L and the final time T. The order of a scheme, r, is defined by |en| = O(∆t^r). The higher the order of the scheme, the faster the error decays.

Comment: The important thing to understand is that the local truncation error is not always an indicator of what the global error will do. Schemes that have the same order of LTE and global error are good schemes. We need to define what makes a method have the property that the global error will be of the same order as the LTE.

Taylor Series Methods: To derive these methods we start with a Taylor expansion:

y(tn+1) = y(tn) + ∆t y'(tn) + ∆t2 y''(tn)/2! + . . .

Let's say we want to truncate this at the second derivative and base a method on that. The scheme is, then:

yn+1 = yn + ∆t f(tn, yn) + (∆t2/2) f'(tn, yn)

The Taylor series method can be written as

yn+1 = yn + ∆t F(tn, yn)

where F = f + (∆t/2) f'. If we take the LTE for this scheme, we get (as expected) LTE = O(∆t2).

Of course, we designed this method to give us this order, so it shouldn't be a surprise! So the LTE is reasonable, but what about the global error? Just as in the Euler forward case, we can show that the global error is of the same order as the LTE. How do we do this? We have two facts:

y(tn+1) = y(tn) + ∆t F(tn, y(tn)) + ∆t·LTE
yn+1 = yn + ∆t F(tn, yn)

where F = f + (∆t/2) f'. Now we subtract these two:

y(tn+1) - yn+1 = y(tn) - yn + ∆t (F(tn, y(tn)) - F(tn, yn)) + ∆t·LTE

Now, if F is Lipschitz continuous, we can say en+1 ≤ (1 + ∆t L) en + ∆t |LTE|. Of course, this is the same proof as for Euler's method, except that now we are looking at F, not f, and the LTE is of higher order. We can do this no matter which Taylor series method we use, that is, however many terms we keep before we truncate.

Advantages and Disadvantages of the Taylor Series Method:

Advantages:
a) One step, explicit
b) Can be high order
c) Easy to show that the global error is the same order as the LTE

Disadvantages: Needs the explicit form of derivatives of f.

Example 1: Solve the following using Euler's method, with h = 0.1.

Step 1: We start at the initial value (0, 4) and calculate the value of the derivative at this point. We have:

We substitute our starting point and the derivative we just found to obtain the next point along.

y(x + h) ≈ y(x) + h f(x, y)
y(0.1) ≈ 4 + 0.1(-1.75680249531) ≈ 3.82431975047

Step 2: Now we need to calculate the value of the derivative at this new point (0.1, 3.82431975047). We have:

Once again, we substitute our current point and the derivative we just found to obtain the next point along.

y(x + h) ≈ y(x) + h f(x, y)
y(0.2) ≈ 3.82431975047 + 0.1(-1.8103864498) ≈ 3.64328110549

We proceed for the required number of steps and obtain these values:

Here's the graph of this solution.
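Stepping schemes like the one tabulated above are straightforward to automate. Below is a minimal Euler sketch; since this example's f(x, y) was given in a figure that is not reproduced here, the test problem y' = -2y, y(0) = 1 (exact solution e^(-2x)) is my own assumption, chosen so the error can be measured.

```python
import math

# Euler's method: y_{n+1} = y_n + h * f(x_n, y_n)
def euler(f, x0, y0, h, n_steps):
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)
        x = x + h
    return y

# Hypothetical test problem (not the book's example):
#   y' = -2y, y(0) = 1, exact solution y(x) = exp(-2x).
f = lambda x, y: -2.0 * y
exact = math.exp(-2.0)  # y(1)
for h in (0.1, 0.01):
    approx = euler(f, 0.0, 1.0, h, int(round(1.0 / h)))
    print(h, approx, abs(approx - exact))
# Shrinking h by a factor of 10 shrinks the error by about a factor of 10,
# as expected for a first-order method.
```

Running this shows directly why a small interval size is needed for reasonable accuracy: the error only falls off linearly with h.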

The problem with Euler's method is that you have to use a small interval size to get a reasonably accurate result. That is, it's not very efficient.

Example 2: y' + 2y = 2 - e^(-4t), y(0) = 1

Use Euler's method with a step size of h = 0.1 to find approximate values of the solution at t = 0.1, 0.2, 0.3, 0.4, and 0.5. Compare them to the exact values of the solution at these points.

Solution: This is a fairly simple linear differential equation, so we'll leave it to you to check that the solution is

y(t) = 1 + (1/2) e^(-4t) - (1/2) e^(-2t)

In order to use Euler's method we first need to rewrite the differential equation:

y' = 2 - e^(-4t) - 2y

From this we can see that f(t, y) = 2 - e^(-4t) - 2y. Also note that t0 = 0 and y0 = 1. We can now start doing some computations:

f(t0, y0) = f(0, 1) = 2 - e^0 - 2(1) = -1
y1 = y0 + h f(t0, y0) = 1 + 0.1(-1) = 0.9

So, the approximation to the solution at t1 = 0.1 is y1 = 0.9. At the next step we have

f(t1, y1) = f(0.1, 0.9) = 2 - e^(-0.4) - 2(0.9) = -0.470320046
y2 = y1 + h f(t1, y1) = 0.9 + 0.1(-0.470320046) = 0.852967995

Therefore, the approximation to the solution at t2 = 0.2 is y2 = 0.852967995.

Similarly, we obtain y3, y4 and y5. Here's a quick table that gives the approximations as well as the exact values of the solution at the given points.

The formula for the error is:

percent error = |exact - approximate| / |exact| × 100

4.3 RUNGE-KUTTA METHODS

To avoid the disadvantage of the Taylor series method, we can use Runge-Kutta methods. These are still one-step methods, but they depend on estimates of the solution at different points. They are written out so that they don't look messy:

Second Order Runge-Kutta Methods:

k1 = ∆t f(tn, yn)
k2 = ∆t f(tn + α∆t, yn + βk1)
yn+1 = yn + a k1 + b k2

let’s see how we can choose the parameters a, b, α, β so that this method has the highest order LT E possible. Take the Taylor expansions to express the LTE: Fourth Order Runge-Kutta Methods: The second order method requires 2 evaluations of f at every timestep, the fourth order method requires 4 evaluations of f at every timestep. In general: For an rth order RungeKutta method we need S(r) evaluations of f for each timestep, where Practically speaking, people stop at r = 5. 95 CU IDOL SELF LEARNING MATERIAL (SLM)

Advantages of Runge-Kutta Methods:
1. One-step method: the global error is of the same order as the local error.
2. Don't need to know derivatives of f.
3. Easy for "automatic error control".

Automatic Error Control: Uniform grid spacing (in this case, uniform time steps) is good for some cases but not always. Sometimes we deal with problems where varying the grid size makes sense. How do you know when to change the step size? If we have an rth order scheme and an (r+1)th order scheme, we can take the difference between these two to be the error in the scheme, and make the step size smaller if we prefer a smaller error, or larger if we can tolerate a larger error. For automatic error control you are computing a "useless" (r+1)th order scheme . . . what a waste! But with Runge-Kutta we can take a fifth order method and a fourth order method using the same k's, with only a little extra work at each step.

Example 1: Use the Runge-Kutta method of order 4 to solve the following, using a step size of h = 0.1 for 0 ≤ x ≤ 1.

Step 1: Note: The following looks tedious, and it is. We'll use a computer (not a calculator) to do most of the work for us. The following is here so you can see how the formula is applied. We start with x = 0 and y = 1. We'll find the F values first.

= 0.9655827899

Using this new y-value, we would start again, finding the new F1, F2, F3 and F4, and substituting into the Runge-Kutta formula. We continue with this process and construct the following table of Runge-Kutta values. (I used a spreadsheet to obtain the table; using a calculator is very tedious and error-prone.)

We haven't included any values after y in the bottom row as we won't be using them.
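The automatic error control idea discussed earlier can be sketched with step doubling: compare one RK4 step of size h against two steps of size h/2, and shrink h whenever the two estimates disagree by more than a tolerance. This is a simplified stand-in for an embedded Runge-Kutta pair; the tolerance, test problem and update rules below are my own choices.

```python
import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Step doubling: one step of size h vs. two steps of size h/2.
# The difference between the two estimates serves as the error estimate.
def adaptive_rk4(f, t, y, t_end, h, tol):
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        if abs(y_big - y_half) > tol:
            h /= 2              # estimated error too large: shrink and retry
            continue
        t, y = t + h, y_half    # accept the more accurate estimate
        if abs(y_big - y_half) < tol / 32:
            h *= 2              # error comfortably small: try a larger step
    return y

# Hypothetical test problem: y' = -2y, y(0) = 1, exact solution exp(-2t).
f = lambda t, y: -2.0 * y
y_end = adaptive_rk4(f, 0.0, 1.0, 1.0, 0.5, 1e-8)
print(y_end, math.exp(-2.0))
```

A production integrator would instead use an embedded pair (sharing the k's between a fourth and fifth order formula, as the text describes), but the accept/shrink/grow logic is the same.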

Example 2: Solve the following using RK4 (the Runge-Kutta method of order 4) for 0 ≤ x ≤ 2, with a step size of h = 0.2:

Solution:

Step 1: We have x0 = 0 and y0 = 5.

F1 = h f(x, y) = 0.2((0 + 5) sin(0)(5)) = 0

For F2, we need to know:

We substitute these into the F2 expression:

For F3, we need to know:

So

