S2 = Sum of the minors of the main diagonal elements
   = (0 − 1) + (0 − 1) + (0 − 1) = −3
S3 = Determinant value of A = 2
Characteristic equation: λ³ − S1λ² + S2λ − S3 = 0, i.e. λ³ − 3λ − 2 = 0
Thus the eigen values of A are −1, −1, 2.
To find eigen vectors: solve (A − λI)X = 0, i.e.
[ −λ   1    1 ] [x1]   [0]
[  1  −λ    1 ] [x2] = [0]
[  1   1   −λ ] [x3]   [0]
(i) When λ = 2:
−2x1 + x2 + x3 = 0    (1)
x1 − 2x2 + x3 = 0     (2)
x1 + x2 − 2x3 = 0     (3)
151 CU IDOL SELF LEARNING MATERIAL (SLM)
Solving (1) and (2) by the rule of cross multiplication, we get
x1/3 = x2/3 = x3/3, i.e. x1/1 = x2/1 = x3/1
∴ The eigen vector is X1 = (1, 1, 1)ᵀ.
(ii) When λ = −1:
[ 1  1  1 ] [x1]   [0]
[ 1  1  1 ] [x2] = [0]
[ 1  1  1 ] [x3]   [0]
x1 + x2 + x3 = 0    (4)
x1 + x2 + x3 = 0    (5)
x1 + x2 + x3 = 0    (6)
All three equations reduce to the single equation x1 + x2 + x3 = 0: one equation in three unknowns.
Put x1 = 0; then x2 + x3 = 0, i.e. x2 = −x3, so x2/1 = x3/(−1).
Hence one eigen vector is X2 = (0, 1, −1)ᵀ.
To get another eigen vector, put x2 = 0; then x1 + x3 = 0, i.e. x1 = −x3, so x1/1 = x3/(−1).
Hence another eigen vector is X3 = (1, 0, −1)ᵀ.
Example: 4
Find the eigen values and eigen vectors of the matrix
A = [ −2   2  −3 ]
    [  2   1  −6 ]
    [ −1  −2   0 ]
Solution:
The characteristic equation of the matrix A is |A − λI| = 0, i.e. λ³ − S1λ² + S2λ − S3 = 0, where
S1 = Sum of main diagonal elements = −2 + 1 + 0 = −1
S2 = Sum of the minors of the main diagonal elements
   = (0 − 12) + (0 − 3) + (−2 − 4) = −21
S3 = Determinant value of A
   = (−2)(0 − 12) − 2(0 − 6) + (−3)(−4 + 1) = 24 + 12 + 9 = 45
Characteristic equation: λ³ + λ² − 21λ − 45 = 0
Thus the eigen values of A are −3, −3, 5.
To find eigen vectors: solve (A − λI)X = 0, i.e.
[ −2−λ    2    −3 ] [x1]   [0]
[   2    1−λ   −6 ] [x2] = [0]
[  −1    −2    −λ ] [x3]   [0]
(i) When λ = −3:
[  1   2  −3 ] [x1]   [0]
[  2   4  −6 ] [x2] = [0]
[ −1  −2   3 ] [x3]   [0]
x1 + 2x2 − 3x3 = 0     (1)
2x1 + 4x2 − 6x3 = 0    (2)
−x1 − 2x2 + 3x3 = 0    (3)
From (1), (2) and (3) we get only one independent equation, x1 + 2x2 − 3x3 = 0.
Case 1: put x1 = 0; then 2x2 − 3x3 = 0, i.e. x2/3 = x3/2.
∴ One eigen vector is X1 = (0, 3, 2)ᵀ.
Case 2: put x2 = 0 in x1 + 2x2 − 3x3 = 0; then x1 = 3x3, i.e. x1/3 = x3/1.
∴ Another eigen vector is X2 = (3, 0, 1)ᵀ.
(ii) When λ = 5:
[ −7   2  −3 ] [x1]   [0]
[  2  −4  −6 ] [x2] = [0]
[ −1  −2  −5 ] [x3]   [0]
−7x1 + 2x2 − 3x3 = 0    (4)
2x1 − 4x2 − 6x3 = 0     (5)
−x1 − 2x2 − 5x3 = 0     (6)
By the method of cross multiplication, from (4) and (5) we get
x1/(−12 − 12) = x2/(−6 − 42) = x3/(28 − 4)
x1/(−24) = x2/(−48) = x3/24, i.e. x1/1 = x2/2 = x3/(−1)
Hence the eigen vector is X3 = (1, 2, −1)ᵀ.
Example: 5
Find the eigen values and eigen vectors of the matrix
A = [  7  −2   0 ]
    [ −2   6  −2 ]
    [  0  −2   5 ]
Solution:
The characteristic equation of the matrix A is |A − λI| = 0, i.e. λ³ − S1λ² + S2λ − S3 = 0, where
S1 = Sum of main diagonal elements = 7 + 6 + 5 = 18
S2 = Sum of the minors of the main diagonal elements
   = (30 − 4) + (35 − 0) + (42 − 4) = 99
S3 = Determinant value of A
   = 7(30 − 4) − (−2)(−10 − 0) + 0(4 − 0)
   = 182 − 20 = 162
Characteristic equation: λ³ − 18λ² + 99λ − 162 = 0
Thus the eigen values of A are 3, 6, 9.
To find eigen vectors: solve (A − λI)X = 0, i.e.
[ 7−λ   −2    0  ] [x1]   [0]
[ −2   6−λ   −2  ] [x2] = [0]
[  0   −2   5−λ  ] [x3]   [0]
(i) When λ = 3:
[  4  −2   0 ] [x1]   [0]
[ −2   3  −2 ] [x2] = [0]
[  0  −2   2 ] [x3]   [0]
4x1 − 2x2 + 0x3 = 0     (1)
−2x1 + 3x2 − 2x3 = 0    (2)
0x1 − 2x2 + 2x3 = 0     (3)
Solving (2) and (3) by the rule of cross multiplication, we get
x1/(6 − 4) = x2/(0 + 4) = x3/(4 − 0)
x1/2 = x2/4 = x3/4, i.e. x1/1 = x2/2 = x3/2
∴ The eigen vector is X1 = (1, 2, 2)ᵀ.
(ii) When λ = 6:
[  1  −2   0 ] [x1]   [0]
[ −2   0  −2 ] [x2] = [0]
[  0  −2  −1 ] [x3]   [0]
x1 − 2x2 + 0x3 = 0      (4)
−2x1 + 0x2 − 2x3 = 0    (5)
0x1 − 2x2 − x3 = 0      (6)
By the method of cross multiplication, from (5) and (6) we get
x1/(0 − 4) = x2/(0 − 2) = x3/(4 − 0)
x1/(−4) = x2/(−2) = x3/4, i.e. x1/2 = x2/1 = x3/(−2)
Hence the corresponding eigen vector is X2 = (2, 1, −2)ᵀ.
(iii) When λ = 9:
[ −2  −2   0 ] [x1]   [0]
[ −2  −3  −2 ] [x2] = [0]
[  0  −2  −4 ] [x3]   [0]
−2x1 − 2x2 + 0x3 = 0    (7)
−2x1 − 3x2 − 2x3 = 0    (8)
0x1 − 2x2 − 4x3 = 0     (9)
From (7) and (8) we get
x1/(4 − 0) = x2/(0 − 4) = x3/(6 − 4)
x1/4 = x2/(−4) = x3/2, i.e. x1/2 = x2/(−2) = x3/1
Hence the eigen vector is X3 = (2, −2, 1)ᵀ.
Example: 6
Verify that the sum of the eigen values of A equals the trace of A and that their product equals |A|, for the matrix
A = [ 1   0   0 ]
    [ 0   3  −1 ]
    [ 0  −1   3 ]
Solution:
The characteristic equation of A is (1 − λ)(λ² − 6λ + 8) = 0.
The eigen values of A are λ = 1, 2, 4.
Sum of the eigen values = 1 + 2 + 4 = 7. Trace of the matrix = 1 + 3 + 3 = 7.
Product of the eigen values = 1 × 2 × 4 = 8. |A| = 1(9 − 1) = 8.
Hence the properties are verified.
Example: 7
Find the eigen values and eigen vectors of (adj A), given the matrix
A = [  2  0  −1 ]
    [  0  2   0 ]
    [ −1  0   2 ]
Solution:
The characteristic equation of A is (2 − λ)³ − (2 − λ) = 0, i.e. (2 − λ)(λ² − 4λ + 3) = 0.
The eigen values of A are λ = 1, 2, 3.
Case (i) λ = 1. The eigen vector is given by
[  1  0  −1 ] [x1]   [0]
[  0  1   0 ] [x2] = [0]
[ −1  0   1 ] [x3]   [0]
i.e. x1 = x3 and x2 = 0, so X1 = (1, 0, 1)ᵀ.
Case (ii) λ = 2. The eigen vector is given by
[  0  0  −1 ] [x1]   [0]
[  0  0   0 ] [x2] = [0]
[ −1  0   0 ] [x3]   [0]
i.e. −x3 = 0 and −x1 = 0, so x1 = 0 and x3 = 0 and x2 is arbitrary. Let x2 = 1; then X2 = (0, 1, 0)ᵀ.
Case (iii) λ = 3. The eigen vector is given by
[ −1   0  −1 ] [x1]   [0]
[  0  −1   0 ] [x2] = [0]
[ −1   0  −1 ] [x3]   [0]
i.e. x1 = −x3 and x2 = 0, so X3 = (1, 0, −1)ᵀ.
The eigen values of A⁻¹ are 1, 1/2, 1/3, with the same eigen vectors X1, X2, X3.
Now adj A = |A| · A⁻¹ = 6A⁻¹ (∵ |A| = 6 for the given matrix A).
The eigen values of (adj A) are 6 times those of A⁻¹, namely 6, 3, 2. The corresponding eigen vectors are X1, X2, X3 respectively.
Example: 8
Verify that the eigen vectors of the real symmetric matrix
A = [  3  −1   1 ]
    [ −1   5  −1 ]
    [  1  −1   3 ]
are orthogonal in pairs.
Solution:
The characteristic equation of A is λ³ − 11λ² + 36λ − 36 = 0, i.e. (λ − 2)(λ − 3)(λ − 6) = 0.
The eigen values of A are λ = 2, 3, 6.
Case (i) λ = 2. The eigen vector is given by
[  1  −1   1 ] [x1]   [0]
[ −1   3  −1 ] [x2] = [0]
[  1  −1   1 ] [x3]   [0]
By cross multiplication on the first two rows, x1/(−2) = x2/0 = x3/2, so X1 = (1, 0, −1)ᵀ.
Case (ii) λ = 3. The eigen vector is given by
[  0  −1   1 ] [x1]   [0]
[ −1   2  −1 ] [x2] = [0]
[  1  −1   0 ] [x3]   [0]
By cross multiplication, x1/(−1) = x2/(−1) = x3/(−1), so X2 = (1, 1, 1)ᵀ.
Case (iii) λ = 6. The eigen vector is given by
[ −3  −1   1 ] [x1]   [0]
[ −1  −1  −1 ] [x2] = [0]
[  1  −1  −3 ] [x3]   [0]
By cross multiplication, x1/2 = x2/(−4) = x3/2, i.e. x1/1 = x2/(−2) = x3/1, so X3 = (1, −2, 1)ᵀ.
Now X1ᵀX2 = (1)(1) + (0)(1) + (−1)(1) = 0,
X2ᵀX3 = (1)(1) + (1)(−2) + (1)(1) = 0,
X3ᵀX1 = (1)(1) + (−2)(0) + (1)(−1) = 0.
Hence the eigen vectors are orthogonal in pairs.
Example: 9
Find the sum and product of the eigen values of the matrix
A = [ −1   1   1 ]
    [  1  −1   1 ]
    [  1   1  −1 ]
Solution:
Sum of eigen values = sum of the diagonal elements = −1 − 1 − 1 = −3.
The product of the eigen values of a matrix A is equal to its determinant.
∴ Product of eigen values = det A = 4.
Example: 10
For a given matrix A of order three, |A| = 32 and two of its eigen values are 8 and 2. Find the sum of the eigen values.
Solution:
Let the three eigen values of A be λ1, λ2, λ3.
The product of the three eigen values is |A|: λ1 λ2 λ3 = 32 (given), so 16λ3 = 32 and λ3 = 2.
Sum of eigen values = 8 + 2 + 2 = 12.
Example: 11
Obtain the eigen values of A³, where
A = [ 3  2 ]
    [ 1  2 ]
Solution:
The characteristic equation of A is |A − λI| = 0, i.e.
| 3−λ    2  |
|  1    2−λ | = 0
λ² − 5λ + 4 = 0, i.e. (λ − 1)(λ − 4) = 0, so λ = 1, 4.
So the eigen values of A³ are 1³, 4³, i.e. 1, 64.
Example: 12
Find the sum and product of the eigen values of the matrix
A = [  8  −6   2 ]
    [ −6   7  −4 ]
    [  2  −4   3 ]
Solution:
Sum of the eigen values = sum of the main diagonal elements = 8 + 7 + 3 = 18.
Product of the eigen values = |A| = 8(21 − 16) + 6(−18 + 8) + 2(24 − 14) = 40 − 60 + 20 = 0.
Example: 13
Two of the eigen values of
A = [  3  −1   1 ]
    [ −1   5  −1 ]
    [  1  −1   3 ]
are 3 and 6. Find the eigen values of A⁻¹ and −5A.
Solution:
Sum of eigen values = sum of main diagonal elements.
Let the third eigen value be λ: 3 + 6 + λ = 3 + 5 + 3 = 11, so λ = 11 − 9 = 2.
The eigen values are 2, 3, 6.
Hence the eigen values of A⁻¹ are 1/2, 1/3, 1/6, and the eigen values of −5A are −10, −15, −30.
Example: 14
If one of the eigen vectors of the matrix
A = [  6  −2   2 ]
    [ −2   3  −1 ]
    [  2  −1   3 ]
is (2, −1, 1)ᵀ, find the corresponding eigen value.
Solution:
If λ is an eigen value with eigen vector X, then (A − λI)X = 0.
From the first row, (6 − λ)(2) + (−2)(−1) + (2)(1) = 0, i.e. 2(6 − λ) + 2 + 2 = 0, so λ = 8.
Example: 15
What is the sum of the squares of the eigen values of
A = [ 1  7  5 ]
    [ 0  2  9 ]
    [ 0  0  5 ]
Solution:
The eigen values of a triangular matrix are just the principal diagonal elements of the matrix, so the eigen values of A are 1, 2, 5.
The sum of the squares of the eigen values = 1² + 2² + 5² = 1 + 4 + 25 = 30.
Example: 16
If 2, 2, 3 are the eigen values of the matrix A, find the eigen values of the adjoint of A, where
A = [  3  10   5 ]
    [ −2  −3  −4 ]
    [  3   5   7 ]
Solution:
The eigen value of the adjoint of a matrix corresponding to the eigen value λ of A is |A|/λ.
Here |A| = product of the eigen values = 2 × 2 × 3 = 12.
The eigen values of adj A are 12/2, 12/2, 12/3, i.e. 6, 6, 4.
Example: 17
Two of the eigen values of
A = [ 2  2  1 ]
    [ 1  3  1 ]
    [ 1  2  2 ]
are equal to 1 each. Find the eigen values of A⁻¹.
Solution:
Sum of the eigen values = sum of the main diagonal elements.
Let the third eigen value be λ: 1 + 1 + λ = 2 + 3 + 2 = 7, so λ = 5.
The eigen values are 1, 1, 5. Hence the eigen values of A⁻¹ are 1, 1, 1/5.
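The bookkeeping used in the worked examples above (S1 = trace, S2 = sum of the minors of the main diagonal elements, S3 = determinant) is easy to check numerically. Below is a minimal pure-Python sketch, not part of the original text, applied to the matrix of Example 5:

```python
# Numerical check of Example 5 (sketch, not from the source text).

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[7, -2, 0],
     [-2, 6, -2],
     [0, -2, 5]]

# S1 = trace, S2 = sum of minors of the diagonal entries, S3 = det A.
S1 = A[0][0] + A[1][1] + A[2][2]
S2 = ((A[1][1] * A[2][2] - A[1][2] * A[2][1])
    + (A[0][0] * A[2][2] - A[0][2] * A[2][0])
    + (A[0][0] * A[1][1] - A[0][1] * A[1][0]))
S3 = det3(A)
print(S1, S2, S3)            # 18 99 162

# Each eigen value must satisfy lam^3 - S1*lam^2 + S2*lam - S3 = 0.
for lam in (3, 6, 9):
    assert lam**3 - S1 * lam**2 + S2 * lam - S3 == 0

# Verify the eigen vector X1 = (1, 2, 2) for lam = 3: A*X1 must equal 3*X1.
X1 = [1, 2, 2]
AX1 = [sum(A[i][j] * X1[j] for j in range(3)) for i in range(3)]
print(AX1)                   # [3, 6, 6], i.e. 3 * (1, 2, 2)
```

The same three lines of bookkeeping work for any 3 × 3 matrix, so the sketch can be pointed at the other examples to verify their characteristic equations and eigen vectors.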
5.5 SUMMARY
● This chapter explains what linear algebra is and how it relates to vectors and matrices.
● We go over what vectors and matrices are and how to use them to solve problems, as well as the thorny problem of eigenvalues and eigenvectors.
● These tools can be used to do interesting things with datasets, such as rotating photos of faces, and eigenvectors can be extracted to investigate how the PageRank algorithm works.
● By the end of this course, you will have an intuitive grasp of vectors and matrices that will help you bridge the gap between linear algebra problems and machine learning.
5.6 KEYWORDS
• Homogeneous: made up of parts that are all of the same type.
• Eigen values: the special set of scalar values associated with a set of linear equations, most often in matrix equations.
• Eigen vectors: a special set of vectors associated with a linear system of equations (i.e., a matrix equation); they are sometimes also known as characteristic vectors, proper vectors, or latent vectors.
• Trace: the sum of the principal diagonal elements of a matrix.
• Trivial solution: for a system of equations in which the determinant of the coefficient matrix is not zero, the only solution is x = y = z = 0, called the trivial solution.
5.7 LEARNING ACTIVITY
1. Find the sum and product of the eigen values of the matrix
[ 0  1  1 ]
[ 1  0  1 ]
[ 1  1  0 ]
___________________________________________________________________________
2. If the sum of two eigen values of a 3 × 3 matrix is equal to its trace, find the determinant value of the matrix.
___________________________________________________________________________
5.8 UNIT END QUESTIONS
A. Descriptive Questions
Short Questions
1. Find the sum and product of the eigen values of the matrix
A = [  8  −6   2 ]
    [ −6   7  −4 ]
    [  2  −4   3 ]
2. The product of two eigen values of the matrix
A = [  6  −2   2 ]
    [ −2   3  −1 ]
    [  2  −1   3 ]
is 16. Find the third eigen value.
3. Find the eigen values of the inverse of the matrix
A = [ 2  1  0 ]
    [ 0  3  4 ]
    [ 0  0  4 ]
4. If the sum of two eigen values and the trace of a 3 × 3 matrix A are equal, find the value of |A|.
5. Two eigen values of the matrix
A = [ 2  2  1 ]
    [ 1  3  1 ]
    [ 1  2  2 ]
are equal to 1 each. Find the eigen values of A⁻¹.
Long Questions
1. Find the eigen values and eigen vectors of
A = [ 1  1 ]
    [ 1  3 ]
2. Find the eigen values of adj(A) if
A = [ 2  0  1 ]
    [ 0  2  0 ]
    [ 1  0  2 ]
3. Two eigen values of the matrix
A = [ 2  2  1 ]
    [ 1  3  1 ]
    [ 1  2  2 ]
are equal to 1 each. Find the third eigen value.
4. Find the eigen values and eigen vectors of the matrices
(i) A = [ 3  1  4 ]        (ii) A = [ 1  1  3 ]
        [ 0  2  6 ]                 [ 1  5  1 ]
        [ 0  0  5 ]                 [ 3  1  1 ]
5. Find the eigen values and eigen vectors of the matrix
A = [  7  −2   0 ]
    [ −2   6  −2 ]
    [  0  −2   5 ]
B. Multiple Choice Questions
1. Find the eigen vector for the value λ = −2 for the matrix
A = [ 3  5 ]
    [ 3  1 ]
a. (0, 1)ᵀ
b. (1, −1)ᵀ
c. (1, 1)ᵀ
d. (1, 0)ᵀ
2. Find the eigen vector for the value λ = 3 for the matrix
A = [  3  10   5 ]
    [ −2  −3  −4 ]
    [  3   5   7 ]
a. (1, 1, −2)ᵀ
b. (−1, 1, 2)ᵀ
c. (1, −1, 2)ᵀ
d. (1, 2, 2)ᵀ
3. Which of the following is not a necessary condition for a matrix, say A, to be diagonalizable?
a. A must have n linearly independent eigen vectors
b. All the eigen values of A must be distinct
c. A can be an idempotent matrix
d. A must have n linearly dependent eigen vectors
4. Find the trace of the matrix
A = [ 1  0  6 ]
    [ 0  5  0 ]
    [ 0  4  4 ]
a. 0
b. 10
c. 4
d. 1
5. The determinant of the matrix whose eigen values are 4, 2, 3 is given by _______
a. 9
b. 24
c. 5
d. 3
6. Obtain the eigen values of A, where
A = [ 3  2 ]
    [ 1  2 ]
a. 1, −4
b. −1, 4
c. 1, 4
d. −1, −4
7. The product of the eigen values of the matrix
[ 1  0  0 ]
[ 0  3  1 ]
[ 0  1  3 ]
is
a. 6
b. 7
c. 8
d. 9
8. The sum of the eigen values of the matrix
[ −1   1   1 ]
[  1  −1   1 ]
[  1   1  −1 ]
is
a. 3
b. −3
c. 2
d. −2
9. For a given matrix A of order three, |A| = 32 and two of its eigen values are 8 and 2. Find the sum of the eigen values.
a. 12
b. 13
c. 14
d. 15
10. Two of the eigen values of
A = [  3  −1   1 ]
    [ −1   5  −1 ]
    [  1  −1   3 ]
are 3 and 6. The eigen values of A⁻¹ are
a. 1/2, −1/3, 1/6
b. 1/2, 1/3, −1/6
c. −1/2, 1/3, 1/6
d. 1/2, 1/3, 1/6
Answers
1-b, 2-a, 3-d, 4-b, 5-b, 6-c, 7-c, 8-b, 9-a, 10-d.
5.9 REFERENCES
Reference books
● Vittal, P.R., "Allied Mathematics", Reprint, Margham Publications, Chennai.
● Venkatachalapathy, S.G., "Allied Mathematics", Margham Publications, Chennai.
Textbook references
● Singaravelu, A., "Allied Mathematics", Meenakshi Agency, Chennai.
● I. N. Herstein, "Topics in Algebra", John Wiley and Sons, 2015.
● Gilbert Strang, "Introduction to Linear Algebra", Fifth Edition, ANE Books, 2016.
UNIT - 6: LINEAR PROGRAMMING 1
STRUCTURE
6.0 Learning Objectives
6.1 Introduction
6.2 Model in Linear Programming
6.3 Related terminology such as constraints
6.4 Manipulating a Linear Programming Problem
6.5 Summary
6.6 Keywords
6.7 Learning Activity
6.8 Unit End Questions
6.9 References
6.0 LEARNING OBJECTIVES
After studying this unit, you will be able to:
● Identify the role of mathematical models in operations decision making.
● Understand how to construct constrained optimization models.
● Describe the advantages and disadvantages of using optimization models.
● Describe the assumptions of linear programming.
6.1 INTRODUCTION
Operations Research: Linear programming was created during World War II, when the need for a method to maximise resource efficiency was critical. New war-related projects demanded attention, causing resources to be spread thin. "Programming" was a military term for activities such as efficiently preparing schedules and optimally deploying soldiers. The Simplex technique of optimization was created by George Dantzig, a member of the United States Air Force, in 1947 to give an efficient approach for solving programming problems with linear structures. Experts from a range of domains, particularly mathematics and economics, have since developed and investigated the theory underpinning "linear programming."
Assumptions
Before we get too focused on solving linear programs, it is important to review some theory. Several assumptions are implicit in linear programming problems. These assumptions are:
1. Proportionality
The contribution of any variable to the objective function or constraints is proportional to that variable. This implies no discounts or economies of scale. For example, the value of 8x1 is twice the value of 4x1, no more or less.
2. Additivity
The contribution of any variable to the objective function or constraints is independent of the values of the other variables.
3. Divisibility
Decision variables can be fractions. However, by using a special technique called integer programming, we can bypass this condition. Unfortunately, integer programming is beyond the scope of this paper.
4. Certainty
This assumption is also called the deterministic assumption. It means that all parameters (all coefficients in the objective function and the constraints) are known with certainty. Realistically, however, coefficients and parameters are often the result of guesswork and approximation. The effect of changing these numbers can be determined with sensitivity analysis.
Linear Programming: A single objective function, which represents either a profit to be maximised or a cost to be minimised, and a set of constraints that limit the decision variables are typically used. The decision variables' objective function and constraints are all linear functions. Software that can solve problems with millions of variables and tens of thousands of constraints has been developed. Network Flow Programming The linear programme is a subset of the more generic linear programme. The transportation problem, the assignment problem, the shortest path problem, the maximum flow problem, and the minimal cost flow problem are among the problems covered. There are extremely efficient algorithms that are many times more efficient than linear programming in terms of computer time and space utilisation. Integer Programming Some of the variables are required to take on discrete values. NP-hard: Most problems of practical size are very difficult or impossible to solve. Nonlinear Programming The objective and/or any constraint is nonlinear. In general, much more difficult to solve than linear. Most (if not all) real world applications require a nonlinear model. In order to be make the problems tractable, we often approximate using linear functions. Models Dynamic Programming 181 CU IDOL SELF LEARNING MATERIAL (SLM)
A DP model is a representation of a process that includes states, decisions, transitions, and returns. The procedure starts at a point where a decision is taken. A transition to a new state occurs as a result of the decision. A return is realised based on the starting state, ending state, and decision. The process progresses through a series of states until it reaches a final state. The goal is to determine which sequence optimizes the total return. It is possible to handle objectives with very general functional forms, and a global optimal solution is always found. The \"Curse of Dimensionality\" states that the number of states grows exponentially as the problem's dimensions increase. Models Stochastic Processes In many practical situations the attributes of a system randomly change over time. Examples include the number of customers in a checkout line, congestion on a highway, the number of items in a warehouse, and the price of a financial security to name a few. The model is described in part by enumerating the states in which the system can be found. The state is like a snapshot of the system at a point in time that describes the attributes of the system. Events occur that change the state of the system. Models Stochastic Processes In many real-world circumstances, a system's characteristics alter at random throughout time. Customers in a checkout line, traffic on a highway, the amount of things in a warehouse, and the price of a financial security are just a few examples. 182 CU IDOL SELF LEARNING MATERIAL (SLM)
Enumerating the states in which the system can be found is one way of describing the model. The state is a snapshot of the system at a specific point in time that describes the system's characteristics. Events occur that cause the system's state to change. Consider a system with an Automated Teller Machine (ATM). The number of customers at or waiting for the machine is the status. The linear measure of the system's movement is time. Events are arrivals and departures. Models Markov Chains A stochastic process that can be monitored at regular intervals, such as once a day or once a week, can be described by a matrix that specifies the probability of shifting from one state to another in one time interval. The process is known as a Markov Chain if this matrix does not change over time. A range of system metrics that can be used to examine and evaluate a Markov Chain model can be computed using computational approaches. Markov Processes The duration of all state-changing activities is exponentially distributed in a continuous time stochastic process. Time is a constant variable. A matrix depicting the pace of transition from one state to the next completely describes the process. The rates are the parameters of the exponential distributions that they are connected with. The analytical outcomes resemble those of a Markov Chain. Simulation It's not always easy to get a closed form expression for a stochastic system's behaviour. 183 CU IDOL SELF LEARNING MATERIAL (SLM)
Simulation is a technique for estimating statistical measures of complicated systems that is very generic. The random variables in a system are modelled as though they were known. The variables' values are then picked at random from their known probability distributions. The system response is seen once for each replication. One can derive statistics on the findings by simulating a system in this way for many replications and recording the responses. The statistics are used in the evaluation and planning of the project. Time Series and Forecasting A time series is a collection of observations of a random variable that occurs on a regular basis. Typically, they're used to feed into OR decision models. An inventory model, for example, necessitates forecasts of future demand. For example, a university department's course scheduling and staffing model involves projections of future student inflow. For example, projections of river flows for the near future are required in a model for delivering warnings to the population in a river basin. Inventory Theory Inventories are materials that have been stored, are being processed, or are now being processed. When should raw materials be ordered, and how much should be ordered? When should a production order be sent to the factory? What level of safety stock should a retailer keep on hand? In a manufacturing process, how is in-process inventory managed? Reliability Theory Attempts to quantify the likelihood of a system failing. Probability modelling is fundamentally an issue of estimating reliability. In the telecommunications and networking business, this is extremely significant. Operations R: An Overview. 184 CU IDOL SELF LEARNING MATERIAL (SLM)
6.3 RELATED TERMINOLOGY SUCH AS CONSTRAINTS
A linear program consists of a set of variables, a linear objective function indicating the contribution of each variable to the desired outcome, and a set of linear constraints describing the limits on the values of the variables. The "answer" to a linear program is a set of values for the problem variables that results in the best -- largest or smallest -- value of the objective function and yet is consistent with all the constraints. Formulation is the process of translating a real-world problem into a linear program. Once a problem has been formulated as a linear program, a computer program can be used to solve the problem. In this regard, solving a linear program is relatively easy. The hardest part about applying linear programming is formulating the problem and interpreting the solution.
Linear function: All of the equations in a linear program must, by definition, be linear. A linear function has the following form:
a0 + a1x1 + a2x2 + a3x3 + . . . + anxn = 0
In general, the a's are called the coefficients of the equation; they are also sometimes called parameters. The important thing to know about the coefficients is that they are fixed values, based on the underlying nature of the problem being solved. The x's are called the variables of the equation; they are allowed to take on a range of values within the limits defined by the constraints. Note that it is not necessary to always use x's to represent variables; any label could be used, and more descriptive labels are often more useful.
Linear equations and inequalities are often written using summation notation, which makes it possible to write an equation in a much more compact form. The linear equation above, for example, can be written as follows:
a0 + Σ (i = 1 to n) aixi = 0
Decision Variables: In a linear programme, the variables are a collection of values that must be determined in order to solve the problem; the problem is solved when the best values for the variables have been found. Because the challenge is deciding what value each variable should take, the variables are sometimes referred to as decision variables. The variables usually represent the amount of a resource to use or the level of some activity. Defining the variables of the problem is one of the most difficult and/or critical tasks in formulating a problem as a linear programme. Sometimes creative variable definition can be used to dramatically reduce the size of the problem or make an otherwise non-linear problem linear.
As mentioned earlier, a variety of symbols, with subscripts and superscripts as needed, can be used to represent the variables of a linear program. As a general rule, it is better to use variable names that help you remember what the variable represents in the real world. For this general introduction, the variables will be represented -- very abstractly -- as X1, X2, . . ., Xn.
Objective Function: The objective of a linear programming problem is to maximise or minimise a numerical value. This value could be the expected net present value of a project or a forest property; it could be the project's cost; it could also be the amount of wood produced, the expected number of visitor-days at a park, the number of endangered species saved, or the amount of a particular type of habitat to be preserved. Linear programming is a very broad approach, and its applications are only limited by human creativity and imagination.
The objective function indicates how much each variable contributes to the value to be optimized in the problem. The objective function takes the following general form:
Maximise or minimise Z = Σ (i = 1 to n) ciXi
where
ci = the objective function coefficient corresponding to the ith variable, and
Xi = the ith decision variable.
The coefficients of the objective function indicate the contribution to the value of the objective function of one unit of the corresponding variable. For example, if the objective function is to maximize the present value of a project, and Xi is the ith possible activity in the project, then ci (the objective function coefficient corresponding to Xi) gives the net present value generated by one unit of activity i. As another example, if the problem is to minimize the cost of achieving some goal, Xi might be the amount of resource i used in achieving the goal. In this case, ci would be the cost of using one unit of resource i.
Constraints: Constraints define the possible values the variables of a linear programming problem may take. They typically represent resource constraints, or the minimum or maximum level of some activity. They take the following general form:
Σ (i = 1 to n) aj,iXi ≤ bj, for j = 1, 2, . . ., m
where
Xi = the ith decision variable,
aj,i = the coefficient on Xi in constraint j, and
bj = the right-hand-side coefficient on constraint j.
Note that j is an index that runs from 1 to m, and each value of j corresponds to a constraint. Thus, the above expression represents m constraints (equations or, more precisely, inequalities) with this form. Resource constraints are a common type of constraint. In a resource constraint, the coefficient aj,i indicates the amount of resource j used for each unit of activity i, as represented by the value of the variable Xi. The right-hand side of the constraint (bj) indicates the total amount of resource j available for the project.
Non-negativity Constraints: Linear programme variables must always take non-negative values (i.e., they must be greater than or equal to zero). In most circumstances, where the variables represent the levels of a set of activities or the amounts of some resource used, this non-negativity requirement will be reasonable -- even necessary. If you really want to let a variable take on a negative value, there are some formulation "tricks" you can use. These techniques, however, are outside the scope of this course. In any case, the non-negativity constraints are part of all LP formulations, and you should always include them. They are written as follows:
Xi ≥ 0, i = 1, 2, . . ., n
where Xi = the ith decision variable.
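The pieces defined in this section (objective coefficients ci, constraint coefficients aj,i, right-hand sides bj, and the non-negativity requirement) can be represented directly as plain data. The following is a hedged sketch, not from the source; the two-variable problem at the bottom is made up for illustration:

```python
# Sketch: representing the general LP form as plain Python data
# (objective Z = sum(ci * Xi), constraints sum(aji * Xi) <= bj, Xi >= 0).

def objective(c, x):
    # Z = c1*X1 + ... + cn*Xn
    return sum(ci * xi for ci, xi in zip(c, x))

def is_feasible(a, b, x):
    # Non-negativity plus every constraint row must hold.
    if any(xi < 0 for xi in x):
        return False
    return all(sum(aji * xi for aji, xi in zip(row, x)) <= bj
               for row, bj in zip(a, b))

# Hypothetical example: maximise 3*X1 + 2*X2
# subject to X1 + X2 <= 4 and X1 <= 3.
c = [3, 2]
a = [[1, 1], [1, 0]]
b = [4, 3]

print(is_feasible(a, b, [3, 1]))   # True
print(is_feasible(a, b, [3, 2]))   # False (violates X1 + X2 <= 4)
print(objective(c, [3, 1]))        # 11
```

Solving the LP (finding the best feasible point) takes more machinery, but checking feasibility and evaluating the objective are exactly the two operations the definitions above describe.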
6.4 MANIPULATING A LINEAR PROGRAMMING PROBLEM
Many linear problems do not at first follow the canonical form described in the introduction, which is crucial when considering the Simplex algorithm. Constraints may be inequalities, variables may lack a non-negativity constraint, or the problem may ask to maximise rather than minimise z. We now look at how to transform such problems into the proper form.
Inequalities in Constraints
We begin by addressing the problem of converting all constraints in a linear programming problem to strict equalities. We do this by adding new variables to the problem that represent the difference between the left and the right sides of the constraints. Inequalities are transformed into equalities by adding a slack variable to a "less than or equal to" constraint, or by subtracting an excess (surplus) variable from a "greater than or equal to" constraint. For example, the constraint 4x1 + x2 ≤ 3 becomes 4x1 + x2 + s1 = 3 with the addition of the slack variable s1 ≥ 0. If the constraint were originally 4x1 + x2 ≥ 3, the surplus variable e1 must be subtracted (4x1 + x2 − e1 = 3) so that e1 can be a strictly nonnegative variable.
Linear Program: We can reduce the structure that characterizes linear programming problems to the following form:
Minimize c1x1 + c2x2 + · · · + cnxn = z
Subject to a11x1 + a12x2 + · · · + a1nxn = b1
a21x1 + a22x2 + · · · + a2nxn = b2
. . .
am1x1 + am2x2 + · · · + amnxn = bm
x1, x2, . . . , xn ≥ 0.
In a linear program, the expression being optimized, z, is called the objective function. The variables x1, x2, . . . , xn are called decision variables, and their values are subject to m + 1 constraints (every line ending with a bi, plus the non-negativity constraint). A set of values x1, x2, . . . , xn satisfying all the constraints is called a feasible point, and the set of all such points is called the feasible region. The solution of the linear program must be a point (x1, x2, . . . , xn) in the feasible region, or else not all the constraints would be satisfied.
The following example from Winston illustrates that geometrically interpreting the feasible region is a useful tool for solving linear programming problems with two decision variables. The linear program is:
Minimize 4x1 + x2 = z
Subject to 3x1 + x2 ≥ 10
x1 + x2 ≥ 5
x1 ≥ 3
and x1, x2 ≥ 0.
We plotted the system of inequalities as the shaded region in Figure 1. Since all of the constraints are "greater than or equal to" constraints, the shaded region above all three lines is the feasible region.
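The slack/excess conversion described at the start of this section is mechanical, so it can be sketched in a few lines. This is an illustrative sketch, not from the source; the function name and the (coefficients, sense, right-hand side) row format are assumptions:

```python
# Sketch: converting "<=" and ">=" constraints into equalities by
# appending one slack (+1) or excess (-1) variable per constraint.

def to_standard_form(rows):
    # Each row is (coeffs, sense, rhs) with sense in {"<=", ">="}.
    # Returns equality rows with slack/excess columns appended.
    n_extra = len(rows)
    out = []
    for k, (coeffs, sense, rhs) in enumerate(rows):
        extra = [0] * n_extra
        extra[k] = 1 if sense == "<=" else -1   # +slack for <=, -excess for >=
        out.append((list(coeffs) + extra, rhs))
    return out

rows = [([4, 1], "<=", 3),    # 4x1 + x2 <= 3   ->  4x1 + x2 + s1 = 3
        ([3, 1], ">=", 10)]   # 3x1 + x2 >= 10  ->  3x1 + x2 - e2 = 10
for coeffs, rhs in to_standard_form(rows):
    print(coeffs, "=", rhs)
# [4, 1, 1, 0] = 3
# [3, 1, 0, -1] = 10
```

Each new column corresponds to one added variable, and all of the added variables carry the usual non-negativity requirement.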
The solution to this linear program must lie within the shaded region. Recall that the solution is a point (x1, x2) such that the value of z is the smallest it can be while still lying in the feasible region. Since z = 4x1 + x2, plotting the line x1 = (z − x2)/4 for various values of z results in isocost lines, which all have the same slope. Along each of these lines the value of z is constant. In the figure below, the dotted lines represent isocost lines for different values of z. Since isocost lines are parallel to each other, the thick dotted isocost line for which z = 14 is clearly the line that intersects the feasible region at the smallest possible value of z. Therefore, z = 14 is the smallest possible value of z given the constraints. This value occurs at the intersection of the lines x1 = 3 and x1 + x2 = 5, where x1 = 3 and x2 = 2.
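The graphical solution above can be checked numerically. As a sketch (assuming SciPy is available; it is not part of the original text), `scipy.optimize.linprog` solves the same program; since `linprog` expects "less than or equal to" constraints, each "greater than or equal to" row is multiplied by −1:

```python
# Sketch: solve  minimise 4*x1 + x2
#   s.t. 3*x1 + x2 >= 10,  x1 + x2 >= 5,  x1 >= 3,  x1, x2 >= 0
# linprog uses A_ub @ x <= b_ub, so each ">=" row is negated.
from scipy.optimize import linprog

c = [4, 1]                              # objective coefficients
A_ub = [[-3, -1], [-1, -1], [-1, 0]]    # negated ">=" constraint rows
b_ub = [-10, -5, -3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # bounds default to x >= 0

print(res.x, res.fun)  # optimum near x1 = 3, x2 = 2 with z = 14
```

The solver confirms the geometric result: the minimum z = 14 is attained at the vertex (3, 2) of the feasible region.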
6.5 SUMMARY

● Constraints are limitations or restrictions placed on the decision variables.
● The decision variables in all linear programmes should always be non-negative; that is, they should take values greater than or equal to 0.
● A linear programming problem is one in which we have to find the optimal value (maximum or minimum) of a linear function of several variables (called the objective function), subject to the conditions that the variables are non-negative and satisfy a set of linear inequalities; the variables are sometimes called decision variables.
● The common region determined by all the constraints, including the non-negativity constraints x, y ≥ 0, of a linear programming problem is called the feasible region for the problem. The region other than the feasible region is called the infeasible region. The feasible region is always a convex set (a convex polygon when it is bounded).
● Points within and on the boundary of the feasible region represent feasible solutions of the constraints. Any point outside the feasible region is called an infeasible solution.
● Any point in the feasible region that gives the optimal value of the objective function is called an optimal feasible solution.
● A feasible region of a system of linear inequalities is said to be bounded if it can be enclosed within a circle. Otherwise, it is called unbounded.
6.6 KEYWORDS

• Objective function: The objective function is the function in the LPP which is to be optimized.
• Constraints: The linear inequalities or equations or restrictions on the variables of a linear programming problem are called constraints.
• Isocost line: An isocost line is a line showing the various combinations of inputs that cost the same total amount.
• Feasible solution: A feasible solution to a linear program is a solution that satisfies all constraints.
• Feasible region: The feasible region of a linear program is the set of all feasible solutions.

6.7 LEARNING ACTIVITY

1. What are the three components of LPP?
___________________________________________________________________________
_____________________________________________________________________

2. How do you identify constraints in linear programming?
___________________________________________________________________________
_____________________________________________________________________
6.8 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions:
1. What are the constraints in an LPP?
2. Why is it important for an objective and its constraints to be linear?
3. What do you mean by the objective function and constraints in linear programming?
4. What are the objectives of linear programming problems?
5. What are your constraints?

Long Questions:
1. Explain linear programming with examples.
2. Write brief notes on the importance of linear programming.
3. Explain the characteristics of LPP.
4. Explain the LPP and its limitations.
5. Explain the role of LPP in project constraints with examples.

B. Multiple Choice Questions

1. The objective of linear programming for an objective function is to
a. maximize or minimize
b. Subset or proper set modelling
c. Row or column modelling
d. Adjacent modelling

2. For a linear programming equation, the convex set of equations is included in the region of
a. Feasible solutions
b. Disposed solutions
c. Profit solutions
d. Loss solutions

3. Linear programming is used to optimize a mathematical procedure and is a
a. Subset of mathematical programming
b. Dimension of mathematical programming
c. Linear mathematical programming
d. All of these

4. In linear programming, the objective function and objective constraints are
a. Solved
b. Linear
c. Quadratic
d. Adjacent
5. A set of values of decision variables which satisfies the linear constraints and non-negativity conditions of an L.P.P. is called its
a. Unbounded solution
b. Optimum solution
c. Feasible solution
d. None of these

Answers
1-a, 2-a, 3-a, 4-b, 5-c

6.9 REFERENCES

Reference books
● Taha, H.A., "Operations Research: An Introduction", 9th Edition, Pearson Education, Asia, New Delhi, 2016.
● Vittal, P.R., "Allied Mathematics", Reprint, Margham Publications, Chennai.
● Venkatachalapathy, S.G., "Allied Mathematics", Margham Publications, Chennai.

Textbook references
● Singaravelu, A., "Allied Mathematics", Meenakshi Agency, Chennai.
● Shenoy, G.V. and Srivastava, U.K., "Operations Research for Management", Wiley Eastern, 1994.
● Bazaraa, M.S., Jarvis, J.J. and Sherali, H.D., "Linear Programming and Network Flows", John Wiley, 1990.
● Phillips, D.T. and Ravindran, A., "Operations Research", John Wiley, 1992.
● Hillier and Lieberman, "Operations Research", Holden Day, 1986.
UNIT - 7: LINEAR PROGRAMMING 2

STRUCTURE
7.0 Learning Objectives
7.1 Introduction
7.2 Objective function
7.3 Optimization
7.4 Summary
7.5 Keywords
7.6 Learning Activity
7.7 Unit End Questions
7.8 References

7.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
● Describe the fundamentals of linear programming, as well as its benefits and drawbacks.
● Examine the findings, paying special attention to sensitivity and the diagnosis of uncommon events.
● Describe an optimization study from conception to conclusion, including the creation of a formal report.

7.1 INTRODUCTION

In mathematics, a linear programming problem is a system for determining the maximum or minimum value of a variable in a function; it is also known as an optimization problem. LPP aids in the development and solution of a decision-making problem using mathematical methods. The problem is usually expressed as a linear function that must be optimised while adhering to a set of restrictions. The most common application of LPP is in advising management on how to make the most efficient and effective use of limited resources.

In mathematical terms, the problem above can be written as:

Maximize F(X1, X2, …, Xn) such that it meets the constraints C1, C2, …, Cm

This type of formulation is called optimization or mathematical programming. There is an objective function to be maximized (e.g. profit) or minimized (e.g. cost, loss, risk of some undesirable event, etc.).

The X's are the decision variables. They are the things we can adjust. For example, each X can be a driver: Xi = 1 means driver i is selected and will be sent to the customer, while Xi = 0 means he is not selected.

The C's are the constraints. For instance, each car has a distance from potential customers, drivers can drive only for so many hours a day, each road has a speed limit, and each car has a maximum number of passengers it can take.

This is the most common example of Operations Research. Most of the problems in OR fall into one of three problem categories.

1. Optimization
● Mathematical programming: We choose decision variables (drivers to dispatch), an objective function (maximising profit), and a set of constraints (physical, technological, economic, environmental, legal, societal, etc.), similar to the Uber example above. Then we use mathematics to solve the resulting problem.
● Numerical optimization: There are two types of numerical optimization: gradient-based and non-gradient-based. One of the most prominent optimization methods in Machine Learning is Gradient Descent, which is a gradient-based (as the name suggests) method. Non-gradient algorithms (derivative-free optimization) include Bayesian optimization, Cuckoo search, Genetic algorithms, and many others.
● Non-gradient algorithms are used when the objective function is not smooth, or when a closed form of the objective function is not available.

Optimization and Machine Learning are inextricably linked. Many machine learning tasks are expressed as the minimisation of a loss function. During training, the optimization algorithm minimises the loss on the training set. However, the ultimate purpose of machine learning is to minimise the loss on unseen data. As a result, Machine Learning is an optimization problem with the goal of "generalisation".

2. Probabilistic Modeling: A probabilistic model generates a probability distribution, while a deterministic model generates a single probable event outcome.
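To make the gradient-based idea mentioned above concrete, here is a minimal sketch (our own toy example, not from the text) of plain gradient descent on f(x) = (x − 3)², whose derivative is f′(x) = 2(x − 3); repeatedly stepping against the gradient drives x towards the minimiser x = 3:

```python
# Minimal gradient-descent sketch (toy example, names are ours):
# minimise f(x) = (x - 3)^2 by following the negative gradient.

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient from starting point x0."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move opposite to the slope
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges to ~3.0, the minimiser of f
```

With learning rate 0.1 the error shrinks by a factor of 0.8 per step, so 100 steps bring x within about 10⁻⁹ of the true minimiser; too large a learning rate would instead make the iterates diverge.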