
Fundamentals of robotic mechanical systems_ theory, methods, and algorithms-Springer (2003)

Published by Willington Island, 2021-07-03 02:58:54

Description: Modern robotics dates from the late 1960s, when progress in the development of microprocessors made possible the computer control of a multiaxial manipulator. Since then, robotics has evolved to connect with many branches of science and engineering, and to encompass such diverse fields as computer vision, artificial intelligence, and speech recognition. This book deals with robots - such as remote manipulators, multifingered hands, walking machines, flight simulators, and machine tools - that rely on mechanical systems to perform their tasks. It aims to establish the foundations on which the design, control and implementation of the underlying mechanical systems are based. The treatment assumes familiarity with some calculus, linear algebra, and elementary mechanics; however, the elements of rigid-body mechanics and of linear transformations are reviewed in the first chapters, making the presentation self-contained. An extensive set of exercises is included. Topics covered include: kin


2. Mathematical Background

i.e., if a 3 × 3 array [A] is defined in terms of the components of u, v, and w in a given basis, then the first column of [A] is given by the three components of u, the second and third columns being defined analogously. Now, let Q be an isometry mapping the triad {u, v, w} into {u', v', w'}. Moreover, the distance from the origin to the points of position vectors u, v, and w is given simply as ‖u‖, ‖v‖, and ‖w‖, which are defined as

\[ \|\mathbf{u}\| \equiv \sqrt{\mathbf{u}^T\mathbf{u}}, \quad \|\mathbf{v}\| \equiv \sqrt{\mathbf{v}^T\mathbf{v}}, \quad \|\mathbf{w}\| \equiv \sqrt{\mathbf{w}^T\mathbf{w}} \tag{2.14} \]

Clearly,

\[ \|\mathbf{u}'\| = \|\mathbf{u}\|, \quad \|\mathbf{v}'\| = \|\mathbf{v}\|, \quad \|\mathbf{w}'\| = \|\mathbf{w}\| \tag{2.15a} \]

and

\[ \det[\,\mathbf{u}' \;\; \mathbf{v}' \;\; \mathbf{w}'\,] = \pm\det[\,\mathbf{u} \;\; \mathbf{v} \;\; \mathbf{w}\,] \tag{2.15b} \]

If, in the foregoing relations, the sign of the determinant is preserved, the isometry represents a rotation; otherwise, it represents a reflection. Now, let p be the position vector of any point of E³, its image under a rotation Q being p'. Hence, distance preservation requires that

\[ \mathbf{p}^T\mathbf{p} = \mathbf{p}'^T\mathbf{p}' \tag{2.16} \]

where

\[ \mathbf{p}' = \mathbf{Q}\mathbf{p} \tag{2.17} \]

condition (2.16) thus leading to

\[ \mathbf{Q}^T\mathbf{Q} = \mathbf{1} \tag{2.18} \]

where 1 was defined in Section 2.2 as the 3 × 3 identity matrix, and hence, eq.(2.18) states that Q is an orthogonal matrix. Moreover, let T and T' denote the two matrices defined below:

\[ \mathbf{T} = [\,\mathbf{u} \;\; \mathbf{v} \;\; \mathbf{w}\,], \quad \mathbf{T}' = [\,\mathbf{u}' \;\; \mathbf{v}' \;\; \mathbf{w}'\,] \tag{2.19} \]

from which it is clear that

\[ \mathbf{T}' = \mathbf{Q}\mathbf{T} \tag{2.20} \]

Now, for a rigid-body rotation, eq.(2.15b) should hold with the positive sign, and hence,

\[ \det(\mathbf{T}) = \det(\mathbf{T}') \tag{2.21a} \]

and, by virtue of eq.(2.20), we conclude that

\[ \det(\mathbf{Q}) = +1 \tag{2.21b} \]

Therefore, Q is a proper orthogonal matrix, i.e., it is a proper isometry. Now we have

Theorem 2.3.1 The eigenvalues of a proper orthogonal matrix Q lie on the unit circle centered at the origin of the complex plane.
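The two defining properties of a proper orthogonal matrix, eqs.(2.18) and (2.21b), are easy to check numerically. The sketch below uses NumPy and a hypothetical rotation through an angle of 0.3 rad about the z-axis, chosen purely for illustration; a reflection is included to show how the sign of the determinant distinguishes the two kinds of isometry.

```python
import numpy as np

# A hypothetical rotation about the z-axis through t = 0.3 rad;
# any proper orthogonal matrix would serve equally well.
t = 0.3
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# eq.(2.18): Q^T Q = 1, i.e., Q is orthogonal.
print(np.allclose(Q.T @ Q, np.eye(3)))    # True

# eq.(2.21b): det(Q) = +1, i.e., Q is a *proper* orthogonal matrix.
print(np.isclose(np.linalg.det(Q), 1.0))  # True

# A reflection also preserves distances, but reverses orientation:
R = np.diag([1.0, 1.0, -1.0])
print(np.isclose(np.linalg.det(R), -1.0)) # True
```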

Proof: Let λ be one of the eigenvalues of Q and e the corresponding eigenvector, so that

\[ \mathbf{Q}\mathbf{e} = \lambda\mathbf{e} \tag{2.22} \]

In general, Q is not expected to be symmetric, and hence, λ is not necessarily real. Thus, λ is considered complex, in general. In this light, when transposing both sides of the foregoing equation, we will need to take the complex conjugates as well. Henceforth, the conjugate transpose of a vector or a matrix will be indicated with an asterisk as a superscript. As well, the conjugate of a complex variable will be indicated with a bar over the said variable. Thus, the transpose conjugate of the latter equation takes on the form

\[ \mathbf{e}^*\mathbf{Q}^* = \bar{\lambda}\mathbf{e}^* \tag{2.23} \]

Multiplying the corresponding sides of the two previous equations yields

\[ \mathbf{e}^*\mathbf{Q}^*\mathbf{Q}\mathbf{e} = \bar{\lambda}\lambda\,\mathbf{e}^*\mathbf{e} \tag{2.24} \]

However, Q has been assumed real, and hence, Q* reduces to Qᵀ, the foregoing equation thus reducing to

\[ \mathbf{e}^*\mathbf{Q}^T\mathbf{Q}\mathbf{e} = \bar{\lambda}\lambda\,\mathbf{e}^*\mathbf{e} \tag{2.25} \]

But Q is orthogonal by assumption, and hence, it obeys eq.(2.18), which means that eq.(2.25) reduces to

\[ \mathbf{e}^*\mathbf{e} = |\lambda|^2\,\mathbf{e}^*\mathbf{e} \tag{2.26} \]

where | · | denotes the modulus of the complex variable within it. Thus, the foregoing equation leads to

\[ |\lambda|^2 = 1 \tag{2.27} \]

thereby completing the intended proof. As a direct consequence of Theorem 2.3.1, we have

Corollary 2.3.1 A proper orthogonal 3 × 3 matrix has at least one eigenvalue that is +1.

Indeed, the eigenvalues of the real matrix Q either are all real, and hence equal to ±1, or comprise one real eigenvalue and a complex-conjugate pair of product |λ|² = 1; since det(Q) = +1 is the product of the three eigenvalues, at least one eigenvalue must be +1 in either case.

Now, let e be the eigenvector of Q associated with the eigenvalue +1. Thus,

\[ \mathbf{Q}\mathbf{e} = \mathbf{e} \tag{2.28} \]

What eq.(2.28) states is summarized as a theorem below:

Theorem 2.3.2 (Euler, 1776) A rigid-body motion about a point O leaves fixed a set of points lying on a line L that passes through O and is parallel to the eigenvector e of Q associated with the eigenvalue +1.

A further result, which finds many applications in robotics and, in general, in system theory, is given below:
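Theorem 2.3.1 and Euler's theorem can both be observed numerically: every eigenvalue of a rotation has unit modulus, and the eigenvector of the eigenvalue +1 spans the axis of rotation. A minimal NumPy sketch, again using a hypothetical rotation about the z-axis:

```python
import numpy as np

# A hypothetical rotation through t = 0.3 rad about the z-axis.
t = 0.3
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# Theorem 2.3.1: every eigenvalue of Q lies on the unit circle.
lam, vecs = np.linalg.eig(Q)
print(np.allclose(np.abs(lam), 1.0))  # True

# Corollary 2.3.1 and Theorem 2.3.2: one eigenvalue is +1, and its
# eigenvector e spans the axis of rotation (here, the z-axis).
k = np.argmin(np.abs(lam - 1.0))
e = np.real(vecs[:, k])
e /= np.linalg.norm(e)
print(np.allclose(Q @ e, e))  # True: points on the axis are fixed
```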

Theorem 2.3.3 (Cayley-Hamilton) Let P(λ) be the characteristic polynomial of an n × n matrix A, i.e.,

\[ P(\lambda) = \det(\lambda\mathbf{1} - \mathbf{A}) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0 \tag{2.29} \]

Then A satisfies its characteristic equation, i.e.,

\[ \mathbf{A}^n + a_{n-1}\mathbf{A}^{n-1} + \cdots + a_1\mathbf{A} + a_0\mathbf{1} = \mathbf{O} \tag{2.30} \]

where O is the n × n zero matrix.

Proof: See (Kaye and Wilson, 1998).

What the Cayley-Hamilton Theorem states is that any power p ≥ n of the n × n matrix A can be expressed as a linear combination of the first n powers of A; the 0th power of A is, of course, the n × n identity matrix 1. An important consequence of this result is that any analytic matrix function of A can be expressed not as an infinite series, but as a finite sum, namely, a linear combination of the first n powers of A: 1, A, ..., Aⁿ⁻¹. An analytic function f(x) of a real variable x is, in turn, a function with a series expansion. Moreover, an analytic matrix function of a matrix argument A is defined likewise, an example of which is the exponential function. From the previous discussion, then, the exponential of A can be written as a linear combination of the first n powers of A. It will be shown later that any proper orthogonal matrix Q can be represented as the exponential of a skew-symmetric matrix derived from the unit eigenvector e of Q, of eigenvalue +1, and the associated angle of rotation, as yet to be defined.

2.3.1 The Cross-Product Matrix

Prior to introducing the matrix representation of a rotation, we will need a few definitions. We will start by defining the partial derivative of a vector with respect to another vector. This is a matrix, as described below: In general, let u and v be vectors of spaces U and V, of dimensions m and n, respectively. Furthermore, let t be a real variable and f a real-valued function of t, with u = u(t) and v = v(u(t)) being m- and n-dimensional vector functions of t as well, and f = f(u, v).
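Returning briefly to Theorem 2.3.3, both the theorem and its consequence, that Aⁿ is a linear combination of the first n powers of A, are easy to verify numerically. A sketch in NumPy, using an arbitrary 3 × 3 matrix chosen for illustration:

```python
import numpy as np

# An arbitrary (hypothetical) 3x3 matrix; any square matrix
# satisfies its own characteristic equation.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
n = A.shape[0]

# np.poly(A) returns the coefficients of det(lambda*1 - A),
# highest power first: [1, a_{n-1}, ..., a_1, a_0], as in eq.(2.29).
coeffs = np.poly(A)

# eq.(2.30): P(A) = sum_k coeffs[k] * A^(n-k) must vanish.
P_of_A = sum(c * np.linalg.matrix_power(A, n - k)
             for k, c in enumerate(coeffs))
print(np.allclose(P_of_A, np.zeros((n, n))))  # True

# Consequently, A^n is a linear combination of 1, A, ..., A^(n-1):
A_cubed = -(coeffs[1] * A @ A + coeffs[2] * A + coeffs[3] * np.eye(n))
print(np.allclose(np.linalg.matrix_power(A, 3), A_cubed))  # True
```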
The derivative of u with respect to t, denoted by u̇(t), is an m-dimensional vector whose ith component is the derivative of the ith component of u in a given basis, uᵢ, with respect to t. A similar definition follows for v̇(t). The partial derivative of f with respect to u is an m-dimensional vector whose ith component is the partial derivative of f with respect to uᵢ, with a corresponding definition for the partial derivative of f with respect to v. The foregoing derivatives, as all

other vectors, will be assumed, henceforth, to be column arrays. Thus,

\[ \frac{\partial f}{\partial \mathbf{u}} \equiv \begin{bmatrix} \partial f/\partial u_1 \\ \partial f/\partial u_2 \\ \vdots \\ \partial f/\partial u_m \end{bmatrix}, \quad \frac{\partial f}{\partial \mathbf{v}} \equiv \begin{bmatrix} \partial f/\partial v_1 \\ \partial f/\partial v_2 \\ \vdots \\ \partial f/\partial v_n \end{bmatrix} \tag{2.31} \]

Furthermore, the partial derivative of v with respect to u is an n × m array whose (i, j) entry is defined as ∂vᵢ/∂uⱼ, i.e.,

\[ \frac{\partial \mathbf{v}}{\partial \mathbf{u}} \equiv \begin{bmatrix} \partial v_1/\partial u_1 & \partial v_1/\partial u_2 & \cdots & \partial v_1/\partial u_m \\ \partial v_2/\partial u_1 & \partial v_2/\partial u_2 & \cdots & \partial v_2/\partial u_m \\ \vdots & \vdots & \ddots & \vdots \\ \partial v_n/\partial u_1 & \partial v_n/\partial u_2 & \cdots & \partial v_n/\partial u_m \end{bmatrix} \tag{2.32} \]

Hence, the total derivative of f with respect to u can be written as

\[ \frac{df}{d\mathbf{u}} = \frac{\partial f}{\partial \mathbf{u}} + \left(\frac{\partial \mathbf{v}}{\partial \mathbf{u}}\right)^T \frac{\partial f}{\partial \mathbf{v}} \tag{2.33} \]

If, moreover, f is an explicit function of t, i.e., if f = f(u, v, t) and v = v(u, t), then one can write the total derivative of f with respect to t as

\[ \frac{df}{dt} = \frac{\partial f}{\partial t} + \left(\frac{\partial f}{\partial \mathbf{u}}\right)^T \frac{d\mathbf{u}}{dt} + \left(\frac{\partial f}{\partial \mathbf{v}}\right)^T \frac{\partial \mathbf{v}}{\partial t} + \left(\frac{\partial f}{\partial \mathbf{v}}\right)^T \frac{\partial \mathbf{v}}{\partial \mathbf{u}}\,\frac{d\mathbf{u}}{dt} \tag{2.34} \]

The total derivative of v with respect to t can be written, likewise, as

\[ \frac{d\mathbf{v}}{dt} = \frac{\partial \mathbf{v}}{\partial t} + \frac{\partial \mathbf{v}}{\partial \mathbf{u}}\,\frac{d\mathbf{u}}{dt} \tag{2.35} \]

Example 2.3.1 Let the components of v and x in a certain reference frame F be given as

\[ [\mathbf{v}]_{\mathcal{F}} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}, \quad [\mathbf{x}]_{\mathcal{F}} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \tag{2.36a} \]

Then

\[ [\mathbf{v} \times \mathbf{x}]_{\mathcal{F}} = \begin{bmatrix} v_2 x_3 - v_3 x_2 \\ v_3 x_1 - v_1 x_3 \\ v_1 x_2 - v_2 x_1 \end{bmatrix} \tag{2.36b} \]

Hence,

\[ \left[\frac{\partial(\mathbf{v} \times \mathbf{x})}{\partial \mathbf{x}}\right]_{\mathcal{F}} = \begin{bmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{bmatrix} \tag{2.36c} \]
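Example 2.3.1 can be checked numerically: the skew-symmetric matrix of eq.(2.36c), applied to x, reproduces the cross product of eq.(2.36b). A minimal NumPy sketch, with hypothetical numerical vectors chosen for the check:

```python
import numpy as np

def cross_matrix(v):
    """Cross-product matrix of eq.(2.36c): (v x x) = cross_matrix(v) @ x."""
    return np.array([[0.0,   -v[2],  v[1]],
                     [v[2],   0.0,  -v[0]],
                     [-v[1],  v[0],  0.0]])

# Hypothetical vectors for the numerical check.
v = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 4.0, -1.0])

# eq.(2.36b) vs. the matrix-vector product of eq.(2.36c): they agree.
print(np.allclose(np.cross(v, x), cross_matrix(v) @ x))  # True

# The cross-product matrix is skew-symmetric.
V = cross_matrix(v)
print(np.allclose(V, -V.T))  # True
```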



























































































