PROBLEMS AND SOLUTIONS IN MATHEMATICAL PHYSICS

HOLDEN-DAY SERIES IN MATHEMATICAL PHYSICS
Julius J. Brandstatter, Editor

V. V. Bolotin, Dynamic Stability of Elastic Systems
C. Carathéodory, Calculus of Variations and Partial Differential Equations of the First Order. Part I: Partial Differential Equations of the First Order; Part II: Calculus of Variations
Y. Choquet-Bruhat, Problems and Solutions in Mathematical Physics
F. Funk, Calculus of Variations and its Application in Physics and Engineering
S. G. Lekhnitskii, Theory of Elasticity of an Anisotropic Elastic Body
S. G. Mikhlin, The Problem of a Minimum of a Quadratic Functional
D. A. Pogorelov, Fundamentals of Orbital Mathematics
A. N. Tychonov and A. A. Samarski, Partial Differential Equations of Mathematical Physics
M. M. Vainberg, Variational Methods for the Study of Nonlinear Operators

Y. Choquet-Bruhat
PROBLEMS AND SOLUTIONS IN MATHEMATICAL PHYSICS
Translated by C. Peltzer
Translation Editor: J. J. Brandstatter
HOLDEN-DAY, INC., San Francisco, London, Amsterdam

I would like to thank all my colleagues who sent me problems, and especially Messrs. Campbell and Chaillou, Mr. and Mrs. Hennequin, and Messrs. Martineau, Morel and Pisot, who also gave me their solutions.

© Copyright 1967 by Holden-Day, Inc., 500 Sansome Street, San Francisco, California. All rights reserved. No part of this book may be reproduced in any form, by mimeograph or any other means, without permission in writing from the publisher. Library of Congress Catalog Card Number: 66-17892. Printed in the United States of America.

TO THE MEMORY OF MY FATHER

EDITOR'S PREFACE

This elegant text will introduce students of physics, applied mathematics, and engineering to some of the modern mathematical concepts and techniques which are rapidly assuming increased importance in the physical sciences. These methods and concepts have their roots in the theory of abstract linear spaces. Each problem is followed by its solution, which quickly orients the reader to the appropriate technique and the associated concepts. Many of the problems are drawn from physics and engineering and are of varying degrees of complexity. The selection and organization of the material are a testimony to the ingenuity and experience of Professor Choquet-Bruhat as both teacher and mathematician.

For the student to profit most from this text, the simultaneous reading of its "natural companion," the book by Lichnerowicz (see bibliography), as well as that by Schwartz, is recommended. Besides these, some additional references have been included for those who wish to cover some of the subject matter in greater detail. These references were chosen on the basis of accessibility and comprehension.

The text can be used in mathematics, physics, and engineering departments in those courses which stress mathematical methods for the physical sciences. It is felt that this unique text will lead to an appreciation of the role of modern mathematical methods in the formulation and practical solution of complex physical problems.

FOREWORD

The initiation of this collection of volumes* is due to Georges Bruhat. Deeply aware of the change that all science was undergoing, this eminent physicist had diagnosed very early what was lacking in our traditional higher education in mathematics: a certain way of conceiving of mathematics (a certain virtue, one might say) which would lead the non-specialists to use mathematics and to build with it a picture of the real world.
In those not so distant times, engineers and physicists were somewhat afraid of the magnificent edifice of mathematical analysis and avoided using it as much as possible. The mathematics used by physicists often consisted only of the program of "mathématiques spéciales" (advanced calculus); beyond that lay unknown territory. Higher education in mathematics in France was for a long time designed for mathematicians, mainly for analysts, and it gave rise to a splendid school of analysis. But entire branches of mathematics, some already old, had disappeared or were disappearing from the normal curriculum: linear algebra, special functions, various aspects of the theory of functional operations, Fourier transforms, and differential equations. These branches, which are among the most useful to physicists, had to be taught so that they could be used with joy and success, and hence free their users from distrust and psychological complexes.

Under the influence of Georges Darmois a program was created which quickly spread to all science departments and split into "mathematical techniques in physics" and "mathematical methods in physics." Under the same influence we saw this collection grow and provide future theoreticians with indispensable tools. Simultaneously, "pure" mathematics was rediscovering paths temporarily abandoned: the works of Jean Leray and of Laurent Schwartz were revitalizing subjects of deep interest to the physicist and arming him with powerful weapons.

In our collection an essential work was still lacking. The non-specialists who want to use contemporary mathematics need, even more than others, to train themselves, and to have at their disposal problems with clear and elegant solutions which allow them to deepen their understanding of this or that theory. Madame Choquet-Bruhat, a professor at the Sorbonne, accepted upon my request the task of filling this void. She was well prepared to assume this arduous and delicate task: her constant interest in physics; her contributions, appreciated throughout the world, to partial differential equations, the theory of distributions, and general relativity; the programs that she created or taught in Marseilles, Rheims and Paris; and finally her name, which is in itself a testimony to the continuity of the task undertaken.

True to the spirit of Georges Bruhat and Georges Darmois, this book will be, I am sure, a most precious working tool for many of our students.

A. Lichnerowicz

* The "Collection d'ouvrages de mathématiques à l'usage des physiciens," published by Masson & Cie, Paris, under the direction of A. Lichnerowicz.

CONTENTS

Foreword

PART ONE: LINEAR ALGEBRA AND ANALYSIS

I. Linear mappings. Operations on matrices.
1. Representations of linear mappings by matrices
2. Non-commutativity of the product
3. Product of symmetric matrices
4. Non-associativity of a column-by-column product
5. Lorentz rotations
6. Solutions of linear equations
7. Solutions of linear equations
8. Solutions of linear equations
9. Independence of linear equations (M.M.P. Paris test, 1955)
10. Quadrupoles (commutativity) (M.M.P. Marseille, 1956)
11. Computation of the inverse of a matrix (M.M.P. Lille, 1957)
12. Inverse matrix. Divisors of zero (M.M.P. Lille, 1957)
13. Group of matrices (M.M.P. Caen, 1957)

II. Proper values and proper vectors. Reduction of matrices.
14. Calculation of the proper values and proper vectors
15. A system of differential equations with constant coefficients
16. Group of matrices of dimension 2 (T.M.P. Paris, 1959)
17. Multiple proper values (M.M.P. Marseille, 1956)
18. Multiple proper values
19. Multiple proper values
20. Natural modes of vibration
21. Functions of a matrix
22. Necessary and sufficient condition for a matrix to have a complete system of proper vectors
23. Stochastic matrices
24. Proper values of mappings in the space of all second-degree polynomials
25. Maximal properties of the proper values

III. Scalar product and norm. Hermitian operators.
26. Equality of scalar products in arbitrary bases
27. Norms on the space of polynomials of degree n
28. Norms on the space of real square matrices of dimension 2
29. Proper values and vectors of an hermitian matrix
30. Gram determinant
31. Study of electric circuits
32. Unitary operators
33. Product of a positive definite operator and a unitary operator
34. Product of a positive definite operator and a unitary operator
35. Matrix commuting with its adjoint
36. System of coupled oscillators
37. Matrix differential equations
38. Positive definite quadratic forms
39. Proper values of a matrix A relative to a matrix B. Simultaneous reduction of quadratic forms

IV. Vector calculus. Multiple integrals.
40. Convergence of single and double integrals
41. Calculation of often-appearing integrals
42. Divergence-free vector fields
43. Tensorial nature of the coefficients of a quadratic form
44. Rank of a tensor
45. Kronecker tensors
46. Contracted tensor
47. Space-time of special relativity
48. Maxwell equations
49. Absolute differentials
50. Flux of the gradient of 1/r
51. Stokes formula
52. Stokes formula
53. Stokes formula

PART TWO: FUNCTION SPACES, INTEGRAL AND DIFFERENTIAL OPERATORS

V. Function spaces and operators.
1. Vector space and subspaces
2. Vector space and subspaces
3. Distance in a space of continuous functions
4. Norms on a space of continuously differentiable functions
5. Space of continuous functions with the norm $L^1$
6. Regulated functions
7. Convergence of a series of functions in the sense of the norm $L^1$
8. Summable families
9. Pre-Hilbert inner product and norm
10. Convolutions of functions in $L^2$
11. Convergence of a sequence in a Hilbert space
12. Hilbert space of square summable sequences
13. System of linear equations in a Hilbert space
14. Primitive operator. Iteration
15. Projectors
16. Integral operators
17. Fredholm integral operator
18. Hermitian operator. Proper functions
19. Adjoint operator
20. Operator such that $A^*A - AA^* = I$
21. Expansion in series of inverse operators in a Banach space
22. Mellin and Fourier transforms
23. Fourier transform of […]
24. Fourier transform of […]
25. Fourier transform of $t^a$ and convolution product

VI. Series expansions of functions.
26. Calculation of a Fourier series expansion
27. Expansion in cosine series
28. Expansion in sine series
29. Convergence of a Fourier series expansion
30. Exponential Fourier series
31. Exponential Fourier series and Chebychev polynomials
32. Fourier expansion and Bessel functions
33. Differentiation of a Fourier series
34. Differentiation of a Fourier series
35. Fourier expansion on the circle. Differentiation
36. Dirichlet kernel
37. Bessel functions and Fourier transform of a function
38. Series expansion in Legendre polynomials
39. Chebychev polynomials
40. Laguerre polynomials
41. Laguerre polynomials
42. Hermite polynomials

VII. Differential equations.
43. Bessel equations
44. Rotating string
45. Legendre equation
46. Legendre equation
47. Rotating beam (proper values and proper functions)
48. Bending of a beam (Fourier series and Laplace transform)
49. Deformation of a beam (Fourier transform)
50. Volterra and Fredholm integral equations associated with a differential equation
51. Integral equation associated with a differential equation
52. Solution of a differential equation by Laplace transform
53. Solution of an integro-differential equation by Laplace transform
54. Passage of a vehicle over an obstacle
55. Exercise on the beta and gamma functions
56. Laplace transforms of Bessel functions

VIII. Partial differential equations.
57. Equation of a vibrating string
58. Cylindrical wave equation
59. Study of certain distributions, solutions of the vibrating string equation
60. Telegrapher's equation
61. Oscillations of an elastic string
62. Laplacian in two variables
63. Study of $\Delta u = \ldots$
64. Newtonian potential of a circumference
65. Electrostatic potential. Legendre functions
66. Study, with the help of distributions, of […] $= 0$
67. Propagation of heat in a bar
68. Propagation of heat in a cylinder
69. Propagation of heat in a cylinder and Bessel functions
70. Heat propagation in a circular plate
71. Heat conduction in a sphere
72. Solution by Laplace transform of the wall problem (theory of heat)

PART ONE: LINEAR ALGEBRA AND ANALYSIS

I. LINEAR MAPPINGS AND OPERATIONS ON MATRICES

1. Representations of linear mappings by matrices

Let $E$ be the vector space of all polynomials of degree at most three in one variable $x$. Represent by a matrix:
1. the linear mapping which associates with each polynomial $P(x)$ the polynomial $P(x+1)$;
2. the linear mapping which associates with each polynomial $P(x)$ its derivative with respect to $x$;
3. the linear mappings which are respectively the sum and the product of the preceding ones.

Solution

1. The mapping is indeed linear: if $R(x) = P(x) + Q(x)$ one has $R(x+1) = P(x+1) + Q(x+1)$, and if $R(x) = \lambda P(x)$ one has $R(x+1) = \lambda P(x+1)$. To represent it by a matrix, let us take as basis of $E$ the four powers of $x$, namely $x^3, x^2, x, 1$, and calculate the transforms of these basis vectors:
\[
x^3 \to (x+1)^3 = x^3 + 3x^2 + 3x + 1,\quad x^2 \to x^2 + 2x + 1,\quad x \to x + 1,\quad 1 \to 1.
\]
The matrix representing the given linear mapping is obtained by taking as columns the components of the transforms of the basis vectors:
\[
A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 3 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{pmatrix}.
\]
One finds the components of the polynomial $P(x+1)$, the transform by $A$ of the polynomial $P(x) = ax^3 + bx^2 + cx + d$ with components $a, b, c, d$, by writing
\[
A\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} a \\ 3a+b \\ 3a+2b+c \\ a+b+c+d \end{pmatrix}.
\]
One verifies that indeed $a(x+1)^3 + b(x+1)^2 + c(x+1) + d = ax^3 + (3a+b)x^2 + (3a+2b+c)x + (a+b+c+d)$.

2. The mapping is obviously linear; the transforms of the basis vectors are now $x^3 \to 3x^2$, $x^2 \to 2x$, $x \to 1$, $1 \to 0$. The matrix representing the mapping, considered as a mapping of the four-dimensional space $E$ of polynomials of degree at most three into itself, is given in this basis by
\[
B = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}.
\]
This matrix is singular (determinant zero); it maps $E$ into a proper vector subspace of $E$ (the three-dimensional vector space of all polynomials of degree at most two).

3. The sum of the linear mappings considered associates with each polynomial $P(x)$ the polynomial $P(x+1) + P'(x)$; its representative matrix, in the basis chosen for $E$, is the sum of the matrices $A$ and $B$, obtained by adding pairwise corresponding elements:
\[
C = A + B = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 6 & 1 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 1 & 1 & 2 & 1 \end{pmatrix}.
\]
The product of the linear mappings associates with each polynomial $P(x)$, depending on the order in which they are performed, either the polynomial $P'(x+1)$ or the derivative of $P(x+1)$, where $x$ is to be replaced by $x+1$; the result here does not depend on the order of the operations. One finds the matrix representative of the product by performing the multiplication:
\[
D = AB = BA = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 \\ 6 & 2 & 0 & 0 \\ 3 & 2 & 1 & 0 \end{pmatrix}.
\]
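As a numerical cross-check of problem 1 (an addition to the text: a short Python/numpy sketch; the matrices are exactly those computed above):

```python
import numpy as np

# Matrices of problem 1, acting on coefficient vectors (a, b, c, d)
# of P(x) = a x^3 + b x^2 + c x + d.
A = np.array([[1, 0, 0, 0],   # P(x) -> P(x + 1)
              [3, 1, 0, 0],
              [3, 2, 1, 0],
              [1, 1, 1, 1]])
B = np.array([[0, 0, 0, 0],   # P(x) -> P'(x)
              [3, 0, 0, 0],
              [0, 2, 0, 0],
              [0, 0, 1, 0]])

p = np.array([1, 2, 0, 1])            # P(x) = x^3 + 2x^2 + 1
print(A @ p)                          # coefficients of P(x+1): [1 5 7 4]
assert np.array_equal(A @ B, B @ A)   # shift and derivative commute here
print(A @ B)                          # the matrix D of P -> P'(x+1)
```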
2. Non-commutativity of the product

Consider the vector space $E$ of all first-degree trigonometric polynomials in one variable $\theta$:
\[
P(\theta) = a\cos\theta + b\sin\theta + c.
\]
Represent by a matrix:
1. the linear mapping which associates with $P(\theta)$ the polynomial $P(-\theta)$;
2. the linear mapping which associates with $P(\theta)$ its derivative with respect to $\theta$;
3. the linear mappings which are the products of the two preceding ones.

Solution

1. A basis of the vector space of all polynomials $P(\theta)$ is formed by the three linearly independent polynomials $\cos\theta$, $\sin\theta$, $1$. In this basis, the matrix representing the (obviously linear) mapping is
\[
A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]
2. In the same basis, the linear mapping which associates with $P(\theta)$ its derivative has for representative matrix
\[
B = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\]
3. The product is obviously not commutative, since
\[
AB = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = -BA.
\]

3. Product of symmetric matrices

Let $A$ and $B$ be two square symmetric matrices of dimension $n$. Show that the product $AB$ is symmetric if and only if $A$ and $B$ commute.

Solution

If $C = AB$, the transposed matrix is $C' = B'A' = BA$, since $A$ and $B$ are symmetric. Thus $C = C'$ if and only if $AB = BA$.

4. Non-associativity of a column-by-column product

Denote by $a_{ij}$ the element located in the $i$th column and $j$th row of an arbitrary square matrix $A$ of dimension $n$. One defines a product $C = A \circ B$ of two square matrices $A$ and $B$ of dimension $n$ by the formula*
\[
c_{ij} = \sum_k a_{ik} b_{jk}.
\]
Show that such a product is not associative.

* Such a product does not correspond to the product of the linear mappings represented by the matrices.

Solution

Since $b_{jk}$, the element of the $j$th column and $k$th row of $B$, is the element ($k$th column, $j$th row) of the transposed matrix $B'$, the product defined here is given in terms of the usual product (rows of $B'$ by columns of $A$) by
\[
C = A \circ B = B'A.
\]
Let us take the product of three matrices $A$, $B$, $C$; we have
\[
(A \circ B) \circ C = C'(A \circ B) = C'B'A
\quad\text{and}\quad
A \circ (B \circ C) = (B \circ C)'A = (C'B)'A = B'CA,
\]
which in general are not equal.

5. Lorentz rotations

Show that the matrices
\[
A(v) = \frac{1}{\sqrt{1 - v^2}}\begin{pmatrix} 1 & v \\ v & 1 \end{pmatrix},
\]
depending on a parameter $v$ ($|v| < 1$), form a group (Lorentz rotations).

Solution

The product of two such matrices is
\[
A(v_1)A(v_2) = \frac{1 + v_1 v_2}{\sqrt{(1 - v_1^2)(1 - v_2^2)}}\begin{pmatrix} 1 & v \\ v & 1 \end{pmatrix},
\qquad v = \frac{v_1 + v_2}{1 + v_1 v_2},
\]
or, setting $v$ as above and noting that $1 - v^2 = (1 - v_1^2)(1 - v_2^2)/(1 + v_1 v_2)^2$, we find $A(v_1)A(v_2) = A(v)$, i.e., a matrix of the given form. The unit matrix is also of the given form ($v = 0$), and the inverse of such a matrix, $A(-v)$, is also of the same form. Thus, the set of given matrices is a subgroup of the group of regular two-dimensional matrices.
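A numerical sketch of the group law of problem 5 (an addition; it assumes the standard parametrization reconstructed above, with units in which $c = 1$):

```python
import numpy as np

def lorentz(v):
    """Lorentz rotation for velocity parameter |v| < 1 (units c = 1)."""
    g = 1.0 / np.sqrt(1.0 - v * v)        # Lorentz factor
    return g * np.array([[1.0, v], [v, 1.0]])

v1, v2 = 0.5, 0.3
v12 = (v1 + v2) / (1 + v1 * v2)           # relativistic velocity addition
assert np.allclose(lorentz(v1) @ lorentz(v2), lorentz(v12))   # closure
assert np.allclose(lorentz(v1) @ lorentz(-v1), np.eye(2))     # inverse
```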
6. Solutions of linear equations

Find the general solution of the system
\[
[\ldots],\qquad [\ldots],\qquad 4x_1 \cdots = 1.
\]

Solution

1. Consider the matrix $A$ formed by the coefficients of the unknowns. By calculation, we find that the determinant of $A$ is zero. The associated homogeneous system $\cdots 7x_2 \cdots 5x_3 \cdots = 0$ admits a solution determined to within a multiplicative factor. Let us take as principal unknowns $x_1, x_2$, and the first two equations as principal equations. The principal determinant is then $[\ldots]$, and the general solution of the homogeneous system is $[\ldots]$.
2. Since the determinant obtained by bordering the principal determinant in the matrix $A$ with the right-hand-side elements of the given equations is equal to zero, the complete system admits a one-parameter family of solutions, namely the sum of a particular solution of the complete system and the general solution of the homogeneous system. Taking as a particular solution of the complete system $[\ldots]$, the required general solution of the complete system is then $[\ldots]$, where the parameter is an arbitrary number.

7. Solutions of linear equations

Consider the system of linear equations
\[
\cdots + x_2 - 3x_3 + 2x_5 = 8,\qquad
2x_1 + 2x_2 - x_3 + 2x_4 - x_5 = 1,\qquad
3x_1 - \cdots + x_5 = 3,\qquad [\ldots].
\]
1. What is the dimension of the vector space of the solutions of the associated homogeneous system? Find a basis of this space. Write in this basis the general solution of the homogeneous system.
2. Find the general solution of the given system.

Solution

1. The associated homogeneous system can be written as
\[
AX = 0 \tag{1}
\]
where $X$ is the column vector with components $x_1, \ldots, x_5$ and $A$ is the four-by-five matrix of the coefficients. The rank of this matrix being at most four, the system (1) (in five unknowns) certainly has nonzero solutions. Furthermore, one sees by calculating them that all the determinants of order four extracted from the array formed by $A$ are null; but an extracted determinant of order three different from zero exists, which we take as principal determinant: $D = [\ldots] = 15 \neq 0$. The matrix $A$ is of rank three, and the vector space of the solutions of (1) is of dimension $5 - 3 = 2$. The homogeneous system (1) is satisfied if the principal equations are satisfied, i.e., if the principal unknowns $x_1, x_2, x_3$ are expressed in terms of $x_4, x_5$ by $[\ldots]$. Two independent solutions $X_1$ and $X_2$ of these equations can be obtained by taking, for example, (a) $x_4 = 1$, $x_5 = 0$ for $X_1$; (b) $x_4 = 0$, $x_5 = 1$ for $X_2$. The general solution of the homogeneous system, written in this basis in terms of two arbitrary constants $\lambda$ and $\mu$, is
\[
X = \lambda X_1 + \mu X_2.
\]
2. The complete system will have solutions if the characteristic determinant is null; this can be verified: $[\ldots] = 0$. Its general solution is the sum of the general solution of the homogeneous system (1) and a particular solution of the complete system, i.e., a particular solution of the corresponding principal equations. We choose this particular solution such that $x_4 = x_5 = 0$. The equations to solve are then $[\ldots] = 8$, $[\ldots] = 1$, $[\ldots] = 3$, and they have for solution $x_1 = 1$, $[\ldots]$. Hence the required general solution is
\[
X = X_0 + \lambda X_1 + \mu X_2,
\]
where $X_0$ is the particular solution just found.

8. Solutions of linear equations

Solve the system
\[
2x \cdots + 4z = 4,\qquad [\ldots],\qquad 2y - 3z = 4.
\]

Solution

The matrix of the coefficients of the unknowns has determinant equal to zero:
\[
\begin{vmatrix} 0 & 7 & 2 \\ 1 & 2 & -3 \\ \cdots & \cdots & \cdots \end{vmatrix} = 0.
\]
The associated homogeneous system therefore has at least one nonzero solution. There exists a determinant of order two extracted from the array which is different from zero, e.g. $[\ldots]$, which will be taken as principal determinant. The given system has a solution if the characteristic determinant is zero; this can be verified: $[\ldots] = 0$. The given system thus has an infinity of solutions depending on one parameter, obtained by adding to the general solution of the associated homogeneous system (principal equations $[\ldots]$, $7y \cdots 2x \cdots$) a particular solution of the complete system (principal equations, $2x - 4z \cdots 0$). Hence the general solution of the given system is $[\ldots]$.
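The workflow of problems 6 to 8 (rank, solution space of the homogeneous system, particular solution) can be sketched in numpy. This is an addition to the text, and since the book's coefficients are partly lost in the source, the system below is only an illustrative stand-in with the same shape as problem 7:

```python
import numpy as np

# Hypothetical 4x5 system: the fourth row is the sum of the first three,
# so the rank is 3 and the homogeneous solutions form a 2-dimensional space.
A = np.array([[1.0, 1, -3, 0, 2],
              [2, 2, -1, 2, -1],
              [3, 0, 0, -1, 1],
              [6, 3, -4, 1, 2]])
b = np.array([8.0, 1, 3, 12])      # compatible right-hand side

r = np.linalg.matrix_rank(A)
print("rank =", r, "-> null space dimension =", A.shape[1] - r)

_, _, Vt = np.linalg.svd(A)
N = Vt[r:]                          # rows: basis of solutions of AX = 0
x0, *_ = np.linalg.lstsq(A, b, rcond=None)   # a particular solution
assert np.allclose(A @ x0, b)       # the complete system is compatible
# General solution: x0 plus any linear combination of the rows of N.
```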
9. Independence of linear equations

Let $X$ be a real vector space, $X^*$ the set of all linear functionals on $X$. A set $Y$ of vectors $x$ of $X$ is said to be convex if $x_1 \in Y$ and $x_2 \in Y$ imply $px_1 + (1-p)x_2 \in Y$ for every scalar $p$ such that $0 \le p \le 1$.
1. Is a subset $Y \subset X$ such that $x_1 \in Y$, $x_2 \in Y$ implies $\tfrac{1}{2}(x_1 + x_2) \in Y$ convex?
2. Let $x_k^*$ ($k = 1, \ldots, n$) be $n$ elements of $X^*$; consider the system $S$ of $n$ equations
\[
\langle x_k^*, x \rangle = \alpha_k \qquad (k = 1, \ldots, n),
\]
where the $\alpha_k$ are given scalars and $x$ is unknown.* Show that a necessary and sufficient condition for $S$ to admit at least one solution for any $\alpha_k$ is that the $x_k^*$ be independent.
3. Let $Y$ be the subset of $X$ of all $x$ such that $\langle x_k^*, x \rangle \ge 0$ for $k = 1, \ldots, n$. Show that $Y$ is convex. Let $Y^*$ be the set of all $x^* \in X^*$ such that $\langle x^*, x \rangle \ge 0$ for every $x \in Y$. Show that $Y^*$ is convex.
4. Show that if the $x_k^*$ are independent, every $x^* \in Y^*$ is of the form $x^* = \sum_k \lambda_k x_k^*$, where the $\lambda_k$ are scalars $\ge 0$.
(M.M.P., Paris, written exam, 1955)

* $\langle x^*, x \rangle$ denotes the value of the linear functional $x^*$ for the vector $x$.

Solution

1. In geometrical language, the convexity condition states that every point $y = px_1 + (1-p)x_2$ of the segment joining $x_1$ to $x_2$ belongs to the set $Y$ if $x_1 \in Y$ and $x_2 \in Y$. The condition given implies only that the "center" $\tfrac{1}{2}(x_1 + x_2) \in Y$ if $x_1 \in Y$ and $x_2 \in Y$, and hence that all points $(1 - k/2^n)x_1 + (k/2^n)x_2 \in Y$ ($k$ and $n$ integers $\ge 0$, $k \le 2^n$); so it is not in general equivalent to the convexity condition. But as every real number $p \in [0, 1]$ is the limit of a sequence of numbers of the form $k/2^n$, the subset $Y$ is convex if it is closed, for then the convergent sequence of points $(1 - k/2^n)x_1 + (k/2^n)x_2 \in Y$ converges to a point in $Y$: $px_1 + (1-p)x_2 \in Y$ for every $p \in [0, 1]$.

2. (a) The condition is necessary. Let us assume that the $x_k^*$ are not independent, i.e., that there exist $n$ numbers $c_k$, not all of them zero, such that $\sum_k c_k x_k^* = 0$. Then, for every $x$, $\sum_k c_k \langle x_k^*, x \rangle = 0$, so $\sum_k c_k \alpha_k = 0$, which obviously cannot be true for all sets of numbers $\alpha_k$.
(b) The condition is sufficient. Let us show that we can choose vectors $x_1, x_2, \ldots, x_n$ in such a way that $\langle x_k^*, x_p \rangle = 0$ for $k \neq p$ and $\langle x_p^*, x_p \rangle = 1$; then $x = \sum_k \alpha_k x_k$ solves $S$. The first vector exists: since $x_1^*$ is not identically zero, there exists an $x$ such that $\langle x_1^*, x \rangle \neq 0$, and by scaling one obtains the required vector $x_1$. Let us try next to find an $x_2$ of the form $x_2 = \alpha x + \beta x_1$. One obtains for $\alpha$ and $\beta$ a linear system whose determinant is not zero for at least one vector $x$; otherwise $x_1^*$ and $x_2^*$ would not be independent. To this vector $x$ there corresponds a pair $\alpha, \beta$ and hence a vector $x_2$. Finally, we prove the existence of $x_p$ assuming the existence of $x_1, \ldots, x_{p-1}$. We look for an $x_p$ of the form $x_p = \alpha x + \sum_{i<p} \beta_i x_i$. The equations to be satisfied determine, taking into account the ones already satisfied by $x_1, \ldots, x_{p-1}$, the coefficients $\beta_i$ in terms of $\alpha$; the remaining condition $\langle x_p^*, x_p \rangle = 1$ holds for at least one vector $x$, since the $x_k^*$ are independent. To this vector $x$ there corresponds a system $\alpha, \beta_i$ and therefore a vector $x_p$ satisfying the required conditions.

3. If $x_1 \in Y$ and $x_2 \in Y$ we have, from the definition of $Y$, $\langle x_k^*, x_1 \rangle \ge 0$ and $\langle x_k^*, x_2 \rangle \ge 0$, $k = 1, \ldots, n$. For $Y$ to be convex, it is necessary and sufficient that these inequalities imply $\langle x_k^*, px_1 + (1-p)x_2 \rangle \ge 0$ for $0 \le p \le 1$, which is obviously true in view of the linearity in $x$ of $\langle x_k^*, x \rangle$ ($p$ and $1-p$ being $\ge 0$). A similar reasoning, making use of the linearity in $x^*$ of $\langle x^*, x \rangle$, shows that $Y^*$ is convex.

4. If $x^* \in Y^*$, we have by definition $\langle x^*, x \rangle \ge 0$ for every $x$ such that $\langle x_k^*, x \rangle \ge 0$, and the $n+1$ linear forms $x^*, x_1^*, \ldots, x_n^*$ cannot be independent, since otherwise there would exist, according to 2, an $x$ giving to the $n+1$ numbers $\langle x^*, x \rangle$, $\langle x_k^*, x \rangle$ arbitrary values, and we would have $\langle x^*, x \rangle < 0$ with $\langle x_k^*, x \rangle \ge 0$. So $x^*$ is of the form $x^* = \sum_k \lambda_k x_k^*$, where the $\lambda_k$ are necessarily $\ge 0$, as can be seen by taking for $x$ a vector such that $\langle x_h^*, x \rangle = 0$ for $h \neq k$ and $\langle x_k^*, x \rangle = 1$. (Then $\langle x^*, x \rangle = \lambda_k \ge 0$.)

10. Quadrupoles (commutativity)

1. Show that a necessary and sufficient condition for a matrix $B$ to commute with a matrix $A$ having distinct proper values is that $B$ admit as proper vectors the proper vectors of $A$ ($A$ and $B$ are two square matrices of dimension $n$).
2. Let $[\ldots]$ be the input and output complex currents and voltages of a quadrupole, and let $[\ldots]$ be the vectors of respective components $[\ldots]$. We recall that these vectors are related to each other by the matrix equations $[\ldots]$, where $A$, $Z$, $C$ are respectively the admittance, impedance, and characteristic matrices of the quadrupole.
(a) Consider the quadrupole (mesh filter, commonly known also as a symmetric lattice network) whose impedance matrix is $Z = [\ldots]$. (Fig. 1: symmetric lattice network with impedances $Z_1$, $Z_2$.) Calculate its admittance and characteristic matrices.
(b) Consider the quadrupole obtained by mounting in series two mesh filters of respective impedances $(Z_1, Z_2)$ and $(Z_3, Z_4)$. (Fig. 2: the two lattice filters in series.) Calculate the characteristic matrix of this quadrupole. Under which condition is it independent of the order of the two filters?
(M.M.P., Marseille, 1956)

Solution

1. If the proper values $\lambda_1, \ldots, \lambda_n$ of $A$ are distinct, the corresponding proper vectors $X_1, \ldots, X_n$ are linearly independent and form a basis of the vector space on which $A$ operates. In this basis $A$ is diagonal, and so is $B$ if $X_1, \ldots, X_n$ are also proper vectors of $B$; the matrices $A$ and $B$ then commute ($AB = BA$) in this basis, and hence in every basis (as do the linear transformations which they represent). So the condition stated is sufficient. We shall show now that it is also necessary. Let $X$ be a proper vector of $A$: $AX = \lambda X$, and so $BAX = \lambda BX$, or also, if $A$ and $B$ commute, $A(BX) = \lambda(BX)$. Thus $BX$ is a proper vector of $A$ for the proper value $\lambda$, and hence is collinear with $X$ because, the proper values of $A$ being distinct, a single proper direction corresponds to each of them. It follows that $BX = \mu X$; that is, $X$ is a proper vector of $B$. Q.E.D.

2. (a) The admittance matrix is the inverse of the impedance matrix: $A = Z^{-1} = [\ldots]$. From its definition and the value of $Z$ one finds for the characteristic matrix $C = [\ldots]$.
(b) The characteristic matrix $C$ of the quadrupole obtained by mounting in series the two quadrupoles of characteristic matrices $C'$ and $C''$ is the product of these matrices:
\[
C = C'C'' = [\ldots],
\]
whose entries involve the products $(Z_1 + Z_2)(Z_3 + Z_4)$, etc. This matrix is independent of the order of the filters ($C'C'' = C''C'$) if $[\ldots]$; one verifies that this is the necessary and sufficient condition for $C'$ and $C''$ to have the same proper vectors.
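Part 1 of problem 10 can be checked numerically. This sketch is an addition to the text; the matrix of eigenvectors and the proper values are made-up illustrative data:

```python
import numpy as np

# Matrices sharing a full set of proper vectors commute.
S = np.array([[1.0, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])          # columns: common proper vectors
A = S @ np.diag([1.0, 2, 3]) @ np.linalg.inv(S)   # distinct proper values
B = S @ np.diag([5.0, -1, 4]) @ np.linalg.inv(S)
assert np.allclose(A @ B, B @ A)

# Conversely: B maps each proper direction of A into itself.
vals, vecs = np.linalg.eig(A)
for i in range(3):
    x = vecs[:, i]
    assert np.allclose(np.cross(B @ x, x), 0)     # B x collinear with x
```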
11. Computation of the inverse of a matrix

1. We want to compute the inverse of the matrix
\[
\begin{pmatrix}
0.98 & 0.02 & 0 & 0.04 & 0 \\
0.08 & 0.93 & 0.08 & 0.07 & 0.03 \\
0.04 & 0.01 & 0.97 & 0.07 & 0.04 \\
0.02 & 0.03 & 0 & 1.03 & 0 \\
0.07 & 0.04 & 0 & 0.08 & 1.01
\end{pmatrix}
\]
by writing it in the form $I - A$, where the elements of $A$ are small, and by using the series $I + A + A^2 + \cdots$. Each element of the inverse matrix is to be obtained to within 0.01. Give the precise arguments and the calculations allowing one to obtain this result.
2. Assume that all elements of a square matrix $B$ of dimension $n$ are smaller in absolute value than $M$; give a condition on $M$ sufficient for the series $I + B + B^2 + \cdots$ to converge and to have for sum $(I - B)^{-1}$.
3. Show by an example that the sufficient condition found is not a necessary condition.
(M.M.P., Lille, 1957)

Solution

We consider first part 2.

2. If all elements of the matrix $B$ of dimension $n$ are smaller in absolute value than $M$, then for an arbitrary vector $X$, if $\|X\|$ denotes its Euclidean norm, we have
\[
\|BX\| \le nM\|X\|.
\]
Indeed, for every component of $BX$ we have
\[
\Big|\sum_j b_{ij}x_j\Big| \le M \sum_j |x_j| \le M\sqrt{n}\,\|X\|,
\]
hence $\|BX\|^2 \le n \cdot nM^2\|X\|^2$. From this one concludes that the norm $\|B\|$ of the linear mapping defined by $B$ satisfies the relation $\|B\| \le nM$. For the convergence of the series $I + B + B^2 + \cdots$, it is sufficient that the remainder $B^{p+1} + B^{p+2} + \cdots$ converge to zero in the norm. (Then every element of the corresponding matrix tends to zero.) But
\[
\|B^{p+1} + B^{p+2} + \cdots\| \le \frac{\|B\|^{p+1}}{1 - \|B\|} \le \frac{(nM)^{p+1}}{1 - nM} \quad\text{when } nM < 1.
\]
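The computation asked for in part 1 can be carried out directly in numpy (an addition to the text; the matrix is the one printed above, and the truncation point is the one justified below):

```python
import numpy as np

# Problem 11: invert the 5x5 matrix by the Neumann series
# (I - A)^(-1) = I + A + A^2 + A^3 + ...  (valid since n*M = 5*0.08 < 1).
M5 = np.array([[0.98, 0.02, 0.00, 0.04, 0.00],
               [0.08, 0.93, 0.08, 0.07, 0.03],
               [0.04, 0.01, 0.97, 0.07, 0.04],
               [0.02, 0.03, 0.00, 1.03, 0.00],
               [0.07, 0.04, 0.00, 0.08, 1.01]])
A = np.eye(5) - M5                        # "small" matrix, |a_ij| <= 0.08

S = np.eye(5) + A + A @ A + A @ A @ A     # I + A + A^2 + A^3
exact = np.linalg.inv(M5)
print(np.max(np.abs(S - exact)))          # < 0.01, as the estimate predicts
```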
For the right-hand side to converge to zero, it is necessary and sufficient that $nM < 1$, and so this condition is a sufficient condition for the convergence of the series $I + B + B^2 + \cdots$. (Its sum is then $(I - B)^{-1}$, as can be seen by calculating the product $S(I - B) = I$.)

3. An example of a convergent series $I + B + B^2 + \cdots$ (also having for sum $(I - B)^{-1}$) which does not satisfy the preceding condition is given by the matrices $B$ whose squares are equal to 0; for example, for $n = 2$ and arbitrary $M > 0$,
\[
B = \begin{pmatrix} M & M \\ -M & -M \end{pmatrix},\qquad B^2 = 0.
\]
Another example is given by the diagonal matrices; e.g., for $n = 2$ and $M > 0$, $B = \operatorname{diag}(M, M)$: the series $I + B + B^2 + \cdots$ converges if $M < 1$, and not necessarily only when $2M < 1$.

1. In the required calculation, $M = 0.08$ and $n = 5$, so $nM = 0.4 < 1$ and the series converges ($A$ is of dimension five). We obtain a better upper bound on the error made on each element than by the norm of the remainder, by noting that if $a_p$ is an element of $A^p$, then $|a_p| \le n^{p-1}M^p$; this is $< 0.01$ if $p = 4$, and the sum of these bounds over all $p \ge 4$ is still $< 0.01$. Therefore, it suffices to calculate $I + A + A^2 + A^3$.

12. Inverse matrix. Divisors of zero

Find the matrices $B$ such that $A = BC$ for the two following cases:
1. $A = [\ldots]$, $C = [\ldots]$;
2. $A = [\ldots]$, $C = [\ldots]$.
(M.M.P., Lille, 1957)

Solution

1. If $C$ has an inverse, we obtain $B$ by multiplying $A$ on the right by $C^{-1}$: $B = AC^{-1}$. In the first case $C$ has an inverse, since its determinant is not null; one finds $C^{-1} = [\ldots]$ and $B = [\ldots]$.
2. The matrix $C$ does not have an inverse; it is singular (null determinant). If the equation $A = BC$ has a solution, it has an infinity of them, which are obtained by adding to one of its solutions the general solution of the homogeneous equation $0 = DC$. Since $C$ is singular and of rank two, it transforms the given three-dimensional vector space onto a two-dimensional subspace. For $DC = 0$, it is necessary and sufficient that the linear mapping represented by $D$ be null on this vector subspace (six conditions); so $D$ depends on three parameters. We obtain the general solution of $DC = 0$ by taking as rows of $D$, each to within a factor, the minors corresponding to the elements of the columns of $C$: $D = [\ldots]$. We seek now a particular solution of $A = BC$ and look for a $B$ of the form $B = [\ldots]$. By writing $BC = A$, we obtain the compatible equations
\[
[\ldots],\qquad b_1 + 2b_2 = 8,\qquad b_2 + 2b_3 = 4,
\]
which determine $b_1, b_2, b_3$.

13. Group of matrices

Let $R^n$ be a vector space of dimension $n$. A basis is chosen in $R^n$, and $A_t$ is the linear mapping of $R^n$ into $R^n$ defined by the square matrix of dimension $n$ whose element in the $j$th row and $i$th column is equal to $C_{j-1}^{i-1}\,t^{j-i}$ if $i \le j$ and to zero if $i > j$ (where the $C_q^p$ denote the binomial coefficients).
1. To every real number $k$, we associate the vector $b_k \in R^n$ with components $1, k, k^2, \ldots, k^{n-1}$. Verify that $A_t b_k = b_{k+t}$.
2. Denoting by $E$ the subspace generated by the vectors $b_k$ ($-\infty < k < +\infty$), show that if $y \in E$, then $A_t A_{t'} y = A_{t+t'} y$.
3. Show that $E = R^n$. Deduce from this that $A_t A_{t'} = A_{t+t'}$, and find the inverse of the matrix $A_t$. Does the set of matrices $A_t$ form a group?
4. If $D = (dA_t/dt)_{t=0}$, calculate $D$. Show that $D^p b_k = d^p b_k/dk^p$ and deduce from this that $D^n = 0$. Does this imply that $D = 0$? Show (by using Taylor's formula) that $A_t = \sum_{p=0}^{n-1} (t^p/p!)D^p = e^{tD}$. Can $A_t A_{t'} = A_{t+t'}$ be deduced from this? Verify that $A_t$ satisfies the relation $dA_t/dt = DA_t$.
Remark: The solution of this problem requires only a few lines of calculation.
(M.M.P., Caen)

Solution

1. Verification (use the binomial formula): the $j$th component of $A_t b_k$ is $\sum_i C_{j-1}^{i-1} t^{j-i} k^{i-1} = (k+t)^{j-1}$.
2. Since $A_t b_k = b_{k+t}$, therefore $A_t A_{t'} b_k = A_t b_{k+t'} = b_{k+t+t'} = A_{t+t'} b_k$; hence, for any vector $y$ in the subspace generated by the vectors $b_k$, $A_t A_{t'} y = A_{t+t'} y$.
3. To show that $E = R^n$, it is sufficient to show that there exist $n$ linearly independent vectors $b_k$; but if we choose $n$ distinct numbers $k_1, \ldots, k_n$, the corresponding vectors $b_k$ are linearly independent, since the associated determinant is the Vandermonde determinant, which is nonzero:
\[
\begin{vmatrix}
1 & k_1 & \cdots & k_1^{n-1} \\
\vdots & & & \vdots \\
1 & k_n & \cdots & k_n^{n-1}
\end{vmatrix} = \prod_{i<j}(k_j - k_i) \neq 0.
\]
Thus, for every vector $y$ of $R^n$, we have $A_t A_{t'} y = A_{t+t'} y$, i.e., the matrix equation $A_t A_{t'} = A_{t+t'}$. The unit matrix $I$ is the matrix $A_0$. The inverse of $A_t$, such that $A_t A_{-t} = A_0 = I$, is $A_{-t}$. The matrices $A_t$ form a group.
4. From the definition of $D = (dA_t/dt)_{t=0}$, it follows that $Db_k$ is the vector $(0, 1, 2k, 3k^2, \ldots)$, i.e., $Db_k = db_k/dk$. We show now, by recurrence, that $D^p b_k = d^p b_k/dk^p$. Assuming that $D^{p-1} b_k = d^{p-1} b_k/dk^{p-1}$ is the vector whose $m$th component is $(m-1)(m-2)\cdots(m-p+1)k^{m-p}$, we see from the expression for $D$ that $D^p b_k$ has for $m$th component
\[
(m-1)(m-2)\cdots(m-p)k^{m-p-1},
\]
which is indeed the $m$th component of the $p$th derivative of the vector $b_k$. Hence $D^n b_k = d^n b_k/dk^n = 0$ (the components of $b_k$ are polynomials in $k$ of degree at most $n-1$), and the matrix $D^n$ is identically zero, since the vectors $b_k$ generate $R^n$. This does not imply that $D = 0$. So we obtain
\[
b_{k+t} = \sum_{p=0}^{n-1} \frac{t^p}{p!}\,\frac{d^p b_k}{dk^p} = \Big(\sum_{p=0}^{n-1} \frac{t^p}{p!}D^p\Big) b_k,
\]
which is the Taylor expansion in the neighborhood of $t = 0$ of the vector function in one variable $b_{k+t}$ (exact here, the components being polynomials). Thus $A_t = \sum_{p=0}^{n-1} (t^p/p!)D^p = e^{tD}$, since the $b_k$ generate $R^n$; and the group relation follows from $e^{tD}e^{t'D} = e^{(t+t')D}$. We note that $A_t$ satisfies the matrix differential equation $dA_t/dt = DA_t$, whose solution, equal to the unit matrix at $t = 0$, is precisely $e^{tD}$.
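The identities of problem 13 are easy to verify numerically (an addition to the text; indices run from 0 here, so the entry $(j, i)$ is $\binom{j}{i}t^{j-i}$):

```python
import numpy as np
from math import comb, factorial

def A(t, n=5):
    """Matrix of problem 13: entry (j, i) = C(j, i) t^(j-i) for i <= j."""
    return np.array([[comb(j, i) * t ** (j - i) if i <= j else 0.0
                      for i in range(n)] for j in range(n)])

def b(k, n=5):
    return np.array([k ** m for m in range(n)], dtype=float)

n, t, s, k = 5, 2.0, 0.7, 1.3
assert np.allclose(A(t) @ b(k), b(k + t))       # A_t b_k = b_{k+t}
assert np.allclose(A(t) @ A(s), A(t + s))       # group law
assert np.allclose(A(t) @ A(-t), np.eye(n))     # inverse is A_{-t}

D = np.diag(np.arange(1.0, n), -1)              # dA_t/dt at t = 0 (nilpotent)
assert np.allclose(D @ b(k), [0, 1, 2*k, 3*k**2, 4*k**3])   # D b_k = db_k/dk
expD = sum(np.linalg.matrix_power(t * D, p) / factorial(p) for p in range(n))
assert np.allclose(expD, A(t))                  # A_t = e^{tD}, since D^n = 0
```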
II. PROPER VALUES AND PROPER VECTORS. REDUCTION OF MATRICES

14. Calculation of the proper values and proper vectors

Find the proper values and the corresponding proper vectors of the matrix $A = [\ldots]$ (a real symmetric matrix of dimension four).

Solution

The proper values are the roots of the determinant $|A - \lambda I| = 0$, where $I$ is the unit matrix. Here $\Delta(\lambda) = [\ldots]$. The roots of $\Delta(\lambda) = 0$ are
\[
\lambda_1 = 0,\quad \lambda_2 = 2,\quad \lambda_3 = +\sqrt{2},\quad \lambda_4 = -\sqrt{2}.
\]
The corresponding proper vectors, with components $x_1, x_2, x_3, x_4$, are solutions of the system $(A - \lambda I)X = 0$. Their directions are defined by: 1. $\lambda = 0$: $[\ldots]$; 2. $\lambda = 2$: $[\ldots]$; 3. $\lambda = \sqrt{2}$: $[\ldots]$; 4. $\lambda = -\sqrt{2}$: $[\ldots]$.
Remark: The matrix $A$ is real and symmetric; the proper values are real, and the proper vectors are orthogonal.

15. A system of differential equations with constant coefficients

Find the general solution of the system of differential equations
\[
[\ldots] \tag{1}
\]

Solution

We seek a solution of the form
\[
x = u\,e^{rt},\quad y = v\,e^{rt},\quad z = w\,e^{rt} \qquad (u, v, w, r \text{ constants}).
\]
Here $u, v, w$ must satisfy the homogeneous linear system of equations
\[
[\ldots] = 0,\qquad -3u + 2(3 - \cdots)v - 3w = 0,\qquad -3v + 4(1 - r)w = 0. \tag{2}
\]
These equations will have a nonzero solution in $u, v, w$ only if $r$ is such that the determinant of the coefficients is null (characteristic equation): $[\ldots] = 0$. This equation has three distinct roots: $r_1 = 0.25$, $r_2 = 1$, $r_3 = [\ldots]$. The corresponding solutions of the system (2), each determined to within a multiplicative factor, are:
1. $r_1 = 0.25$: the system becomes $[\ldots] - 3v = 0$, $-3u + 2(3 - 0.5)v - 3w = 0$, $-3v + 4(0.75)w = 0$; we choose the particular solution $u = 1$, $v = +1.5$, $w = +1.5$.
2. $r_2 = 1$: we take $u = 1$, $v = 0$, $w = 1$.
3. $r_3 = [\ldots]$: we take $u = 1$, $[\ldots]$, $w = -1.5$.
The general integral of the system, a linear combination of the three independent particular solutions found above, is
\[
(x, y, z) = a(1, 1.5, 1.5)e^{t/4} + b(1, 0, 1)e^{t} + c(1, \ldots, -1.5)e^{r_3 t},
\]
where $a, b, c$ are three arbitrary constants.
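The method of problem 15 is mechanical once the matrix of the system is known; here is a numpy sketch (an addition to the text, and since the book's coefficients are partly lost, the matrix used is an illustrative stand-in):

```python
import numpy as np

A = np.array([[1.0, 3, 0],
              [3, -2, -1],
              [0, -1, 1]])         # hypothetical system dX/dt = A X
r, V = np.linalg.eig(A)            # r: exponents, columns of V: (u, v, w)

def x(t, c):
    """General solution x(t) = sum_k c_k (u_k, v_k, w_k) exp(r_k t)."""
    return V @ (c * np.exp(r * t))

c = np.linalg.solve(V, np.array([1.0, 0, 0]))   # fit x(0) = (1, 0, 0)
t, h = 0.3, 1e-6                                # check dx/dt = A x numerically
assert np.allclose((x(t + h, c) - x(t, c)) / h, A @ x(t, c), atol=1e-3)
```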
16. Group of matrices of dimension 2

We denote by $a$ and $b$ two real or complex parameters, with $a \neq 0$, $b \neq 0$, and consider*
\[
A(a, b) = \begin{pmatrix} \operatorname{ch} a & b\,\operatorname{sh} a \\ b^{-1}\operatorname{sh} a & \operatorname{ch} a \end{pmatrix}.
\]
1. Find the proper values of the matrix $A$ and its proper vectors.
2. Find the invertible matrices $T = T(a, b)$ such that $B = T^{-1}AT$ is a diagonal matrix; show that $B$ does not depend on the parameter $b$.
3. Show that for a fixed value of $b$ the matrices $A(a, b)$ corresponding to the different values of $a$ form, with respect to matrix multiplication, a group which is isomorphic to the additive group of the parameter $a$.
4. Consider the matrix
\[
M = \begin{pmatrix} \alpha & \beta \\ \gamma & \alpha \end{pmatrix},
\]
where $\alpha, \beta, \gamma$ are real or complex numbers satisfying $D(M) = \alpha^2 - \beta\gamma = 1$, $\beta\gamma \neq 0$. Infer from the preceding a computational scheme for $M^n$.
(T.M.P., Paris, 1959)

* The convention used here is: ch indicates the hyperbolic cosine and sh the hyperbolic sine.

Solution

1. $X$ is a proper vector of the matrix $A$ if there exists a number $\lambda$, real or complex, such that $AX = \lambda X$, where $X$ is not the null vector. If $x_1, x_2$ are the components of $X$, the preceding equation is equivalent to the two equations
\[
(\operatorname{ch} a - \lambda)x_1 + b\,\operatorname{sh} a\,x_2 = 0,\qquad b^{-1}\operatorname{sh} a\,x_1 + (\operatorname{ch} a - \lambda)x_2 = 0, \tag{1}
\]
which have a nonzero solution only if the determinant of the coefficients is null:
\[
(\operatorname{ch} a - \lambda)^2 - \operatorname{sh}^2 a = 0.
\]
This second-degree equation in $\lambda$ has two roots, $\lambda_1 = \operatorname{ch} a + \operatorname{sh} a = e^a$ and $\lambda_2 = \operatorname{ch} a - \operatorname{sh} a = e^{-a}$.
(a) $\lambda_1 = e^a$. The corresponding proper vector is defined to within a factor by the first equation (1), which is equivalent to the second one: $-\operatorname{sh} a\,x_1 + b\,\operatorname{sh} a\,x_2 = 0$; hence, since $\operatorname{sh} a \neq 0$, $X_1 = (b, 1)$.
(b) $\lambda_2 = e^{-a}$. The proper vector, determined to within a factor, is $X_2 = (-b, 1)$.
2. The matrix $B = T^{-1}AT$ is obtained from the matrix $A$ by a change of basis defined by the matrix $T$. Its proper vectors and proper values are the same as those of $A$. On the other hand, we know that if $B$ is diagonal, the basis in which it is given is formed from proper vectors. Thus, we see that the matrix $T$ has for columns the components of the proper vectors of $A$ in the initial basis. Hence $T$ is given by
\[
T = \begin{pmatrix} k_1 b & -k_2 b \\ k_1 & k_2 \end{pmatrix},
\]
where $k_1, k_2$ are nonzero constants. The matrix $B = T^{-1}AT$ is the diagonal matrix $\operatorname{diag}(e^a, e^{-a})$, which is independent of $b$; this can be verified by carrying out the multiplications.
3. The proper vectors of the matrix $A(a, b)$ do not depend on $a$. For a fixed value of $b$, the matrices $A(a, b)$ are transformed, by the change of basis defined by the matrix $T$, into the matrices $B(a) = \operatorname{diag}(e^a, e^{-a})$. These matrices form a group with respect to matrix multiplication, isomorphic to the additive group of the parameter $a$. Indeed,
\[
B(a)B(a') = \operatorname{diag}(e^{a+a'}, e^{-(a+a')}) = B(a + a'),
\]
which can also be written $A(a, b)A(a', b) = A(a + a', b)$.
4. Consider the matrix $M$ with $D(M) = \alpha^2 - \beta\gamma = 1$. We can always take $\alpha = \operatorname{ch} a$, where $a$ is real or complex; hence $\operatorname{sh}^2 a = \alpha^2 - 1 = \beta\gamma$. Since $b$ is also an arbitrary nonzero real or complex number, we can further set $b\,\operatorname{sh} a = \beta$ and $b^{-1}\operatorname{sh} a = \gamma$ (take $b^2 = \beta/\gamma$), and so it follows that the matrix $M$ is a matrix $A(a, b)$; under the change of coordinates defined by $T$ it takes the form $B(a)$, and $M = TB(a)T^{-1}$. One could obtain $M^n$ by the inverse coordinate change: $M^n = TB^nT^{-1}$. But, more simply, one notices that if $B = B(a)$, then $B^n = B(na)$, and therefore
\[
M^n = TB(na)T^{-1} = A(na, b) = \begin{pmatrix} \operatorname{ch} na & b\,\operatorname{sh} na \\ b^{-1}\operatorname{sh} na & \operatorname{ch} na \end{pmatrix}.
\]
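A numerical check of the scheme $M^n = A(na, b)$ (an addition to the text; the values of $a$, $b$, $n$ are arbitrary choices):

```python
import numpy as np

def A(a, b):
    """The matrix A(a, b) of problem 16."""
    return np.array([[np.cosh(a), b * np.sinh(a)],
                     [np.sinh(a) / b, np.cosh(a)]])

a, b, n = 0.5, 2.0, 5
M = A(a, b)
assert np.isclose(np.linalg.det(M), 1.0)          # ch^2 - sh^2 = 1
assert np.allclose(np.linalg.matrix_power(M, n), A(n * a, b))

# Diagonalization: columns of T are the proper vectors (b, 1) and (-b, 1).
T = np.array([[b, -b], [1.0, 1.0]])
B = np.linalg.inv(T) @ M @ T
assert np.allclose(B, np.diag([np.exp(a), np.exp(-a)]))   # independent of b
```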
17. Multiple proper values

1. Find the proper values of the matrix $A$,
\[
A = \begin{pmatrix} 2 & -2 & 1 \\ 2 & -3 & 2 \\ -1 & 2 & 0 \end{pmatrix},
\]
and the corresponding proper vectors. Can one diagonalize the matrix? If so, by what coordinate change?
2. Find the general solution of the system of differential equations
\[
\frac{dx}{dt} = 2x - 2y + z,\qquad \frac{dy}{dt} = 2x - 3y + 2z,\qquad \frac{dz}{dt} = -x + 2y.
\]
3. Find the solution of the preceding system determined by the initial conditions $x = y = z = 1$ at $t = 0$.
4. Obtain the result of 3 by using the Laplace transform.
(M.M.P., Marseille, 1956)

Solution

1. The characteristic equation of $A$ is obtained by setting equal to zero the determinant $|A - \lambda I|$, where $I$ is the unit matrix:
\[
\Delta(\lambda) = |A - \lambda I| = -(\lambda - 1)^2(\lambda + 3) = 0.
\]
The corresponding proper vectors are:
(a) $\lambda = -3$. The components $x_1, x_2, x_3$ of the proper vector $X$, which is a solution of $(A + 3I)X = 0$, must satisfy the equations
\[
5x_1 - 2x_2 + x_3 = 0,\qquad 2x_1 + 2x_3 = 0,\qquad -x_1 + 2x_2 + 3x_3 = 0.
\]
These three compatible homogeneous equations have a solution, unique to within a factor; hence there is one and only one proper direction, defined by $x_1 = -x_3$, $x_2 = -2x_3$.
(b) $\lambda = 1$, a double proper value. The components $x_1, x_2, x_3$ of the corresponding proper vectors $X$ satisfy the equations $(A - I)X = 0$:
\[
x_1 - 2x_2 + x_3 = 0,\qquad 2x_1 - 4x_2 + 2x_3 = 0,\qquad -x_1 + 2x_2 - x_3 = 0.
\]
Since the three equations reduce to a single one, the proper vectors form a two-dimensional vector space: namely, the set of all vectors in the plane $x_1 - 2x_2 + x_3 = 0$. Thus we can choose two linearly independent vectors, for example $\varepsilon_2 = (2, 1, 0)$ and $\varepsilon_3 = (0, 1, 2)$.
The matrix $A$ admits three linearly independent proper directions (the two directions corresponding to $\lambda = 1$ and the one corresponding to $\lambda = -3$); it can be put in diagonal form by taking as basis vectors three vectors $\varepsilon_1, \varepsilon_2, \varepsilon_3$ along these directions. Let us take, for example, as components of these vectors $(1, 2, -1)$, $(2, 1, 0)$, $(0, 1, 2)$. The corresponding matrix $S$, transforming the new axes $\varepsilon_i$ into the old ones $e_i$, is
\[
S = \begin{pmatrix} 1 & 2 & 0 \\ 2 & 1 & 1 \\ -1 & 0 & 2 \end{pmatrix}.
\]
In the new axes, the matrix $A$ takes the form $B = S^{-1}AS$. One can verify that $B$ is diagonal; we find
\[
B = S^{-1}AS = \operatorname{diag}(-3, 1, 1).
\]
2. The given differential system can be written in vector form: if $X$ is the vector with components $x, y, z$, then $dX/dt = AX$. Its general solution is
\[
X = a\,e^{-3t}\varepsilon_1 + b\,e^{t}\varepsilon_2 + c\,e^{t}\varepsilon_3,
\]
where $\varepsilon_1, \varepsilon_2, \varepsilon_3$ are the three independent proper vectors of 1, corresponding to the proper values $-3$, $1$, $1$ of $A$.
3. The initial conditions at $t = 0$ determine $a, b, c$ through the equations $a\varepsilon_1 + b\varepsilon_2 + c\varepsilon_3 = (1, 1, 1)$, which give $a = 0$, $b = c = \tfrac{1}{2}$; hence $x = y = z = e^t$. (Indeed, the vector $(1, 1, 1)$ lies in the proper plane of $\lambda = 1$.)
4. We write $Lx = u$, $Ly = v$, $Lz = w$. The Laplace transform of the differential system and the initial conditions give the equations
\[
su - 1 = 2u - 2v + w,\qquad sv - 1 = 2u - 3v + 2w,\qquad sw - 1 = -u + 2v;
\]
i.e., denoting by $U$ and $V$ the vectors with respective components $(u, v, w)$ and $(-1, -1, -1)$,
\[
(A - sI)U = V,\qquad\text{or}\qquad U = (A - sI)^{-1}V.
\]
To calculate $(A - sI)^{-1}$, one can use its expression in the proper axes, which is
\[
\operatorname{diag}\Big(\frac{1}{-3-s}, \frac{1}{1-s}, \frac{1}{1-s}\Big).
\]
From this, we obtain its expression in the given axes: $(A - sI)^{-1} = S\operatorname{diag}(\cdots)S^{-1}$. Thus $U$ is the vector with components $u = v = w = 1/(s-1)$, and we obtain, using the inverse Laplace transform, $x = y = z = e^t$, which is indeed the solution found in 3.
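A numerical check of problem 17 (an addition to the text; the matrix and basis are the ones given above):

```python
import numpy as np

A = np.array([[2.0, -2, 1],
              [2, -3, 2],
              [-1, 2, 0]])
S = np.array([[1.0, 2, 0],
              [2, 1, 1],
              [-1, 0, 2]])                    # columns: eps_1, eps_2, eps_3
B = np.linalg.inv(S) @ A @ S
assert np.allclose(B, np.diag([-3.0, 1, 1]))  # A is diagonalizable

def X(t):
    """Solution of dX/dt = AX with X(0) = (1,1,1): X = S e^{tB} S^{-1} X0."""
    return S @ np.diag(np.exp(np.diag(B) * t)) @ np.linalg.solve(S, np.ones(3))

assert np.allclose(X(0.7), np.exp(0.7) * np.ones(3))   # x = y = z = e^t
```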
18. Multiple proper values

1. Find the proper values and proper vectors of the matrix $A = [\ldots]$.
2. Write the matrix $A$ in a reduced form as simple as possible; make clear which transformation matrix is used.
3. Using the preceding calculations, find the solutions of the following differential systems: $[\ldots]$, where $x, y, z$ are the unknown functions and $t$ the variable.
(T.M.P., Lille, 1960)

Solution

1. $\Delta(\lambda) = [\ldots]$. Proper values: $\lambda = -2$ (simple), $\lambda = 1$ (double).
(a) $\lambda = -2$: $(A + 2I)X = 0$ gives the proper direction $x_1 = x_2 = x_3$, i.e., $\varepsilon_3 = (1, 1, 1)$.
(b) $\lambda = 1$: only one proper direction, $x_2 = 2x_1$, $x_3 = 20x_1/3$, i.e., $\varepsilon_2 = (3, 6, 20)$.
2. Since the matrix has only two linearly independent proper vectors, it cannot be put in diagonal form. The simplest reduced form is obtained by taking as a basis for the vector space on which it acts the two proper vectors $\varepsilon_3$ and $\varepsilon_2$, and a third vector $\varepsilon_1$ such that
\[
A\varepsilon_1 = \lambda\varepsilon_1 + \varepsilon_2,
\]
where $\lambda$ is the double proper value (here $\lambda = 1$). If $y_1, y_2, y_3$ are the components of $\varepsilon_1$ in the original axes, this matrix equation reduces to two numerical equations, $4y_1 - \cdots = \cdots$, $[\ldots]$, one solution of which is the vector $\varepsilon_1$ (independent of $\varepsilon_3$ and $\varepsilon_2$) with components $[\ldots]$. The transformation matrix from the old basis to the new one is $[\ldots]$. The transforms of the basis vectors $\varepsilon_1, \varepsilon_2, \varepsilon_3$ by the linear mapping are $A\varepsilon_1 = \varepsilon_1 + \varepsilon_2$, $A\varepsilon_2 = \varepsilon_2$, $A\varepsilon_3 = -2\varepsilon_3$; so, in the new basis, the linear mapping is represented by the matrix
\[
\begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}.
\]
3. The general solution of the first (homogeneous) system, written in matrix form $dX/dt = AX$, is (denoting by $X$ the vector with components $x, y, z$)
\[
X = \alpha\,e^{t}(\varepsilon_1 + t\varepsilon_2) + \beta\,e^{t}\varepsilon_2 + \gamma\,e^{-2t}\varepsilon_3,
\]
where $\alpha, \beta, \gamma$ are arbitrary constants. Indeed,
\[
\frac{dX}{dt} = \alpha\,e^{t}(\varepsilon_1 + t\varepsilon_2 + \varepsilon_2) + \beta\,e^{t}\varepsilon_2 - 2\gamma\,e^{-2t}\varepsilon_3,
\]
and, in view of the transforms by $A$ of $\varepsilon_1, \varepsilon_2, \varepsilon_3$,
\[
AX = \alpha\,e^{t}(\varepsilon_1 + \varepsilon_2 + t\varepsilon_2) + \beta\,e^{t}\varepsilon_2 - 2\gamma\,e^{-2t}\varepsilon_3.
\]
The general solution of the second (inhomogeneous) system is obtained by adding to the preceding general solution $X$ a particular solution of the complete system. We find for this particular solution (a polynomial in $t$) the vector with components $x_0 = [\ldots]$, $y_0 = 6t - 3$, $z_0 = [\ldots]$. By giving the components explicitly, we obtain the general solution $x = [\ldots]$, $y = [\ldots]$, $z = [\ldots]$.

19. Multiple proper values

Consider in the three-dimensional vector space $E_3$, with basis $e_1, e_2, e_3$, the linear mapping $A$ defined by the matrix
\[
A = [\ldots] \tag{1}
\]
where $a$ is a real parameter.
1. Calculate the proper values of $A$.
2. To reduce the matrix (1) to the triangular form (a matrix in which all elements situated on one side of the main diagonal are zeros), we recall that it is sufficient to construct two subspaces $E_1$ and $E_2$ of $E_3$, respectively of dimension 1 and 2, such that $E_1 \subset E_2$ and such that they reduce the operator $A$ (i.e., $x \in E_1$ implies $Ax \in E_1$, etc.). Define a basis $\varepsilon_1, \varepsilon_2, \varepsilon_3$ with respect to which the operator $A$ is represented by a triangular matrix. Give this matrix explicitly.
3. Using the results of 2, integrate the linear differential system $dx/dt = Ax$, with $x = x_0$ at $t = 0$, where $A$ is the matrix (1) and $x$ the vector with components $x_1, x_2, x_3$ relative to the basis $(e_1, e_2, e_3)$.
4. Indicate how one could solve the preceding question, without using the results of 2, by using the theorem on the general structure of the solutions of linear homogeneous differential systems with constant coefficients.
(T.M.P., Lille, 1959)

Solution

1. The characteristic polynomial of $A$ is
\[
\Delta(\lambda) = [\ldots] = (2 - \lambda)(1 - \lambda)^2,
\]
whose roots, that is, the proper values of $A$, are $\lambda = 2$ and $\lambda = 1$ (double root).
2. A one-dimensional space $E_1$ reducing $A$ (such that if $x \in E_1$, then $Ax \in E_1$) is a proper direction. Let us take for $\varepsilon_1$, the basis vector of $E_1$, the proper vector corresponding to $\lambda = 2$; its components are solutions of the homogeneous system $[\ldots]$. We take for $\varepsilon_1$ the vector $(0, 1, -1)$ and seek $E_2 \supset E_1$ such that $Ax \in E_2$ if $x \in E_2$. Let $\varepsilon_2$ be a vector of $E_2$ independent of $\varepsilon_1$; we must have $A\varepsilon_2 \in E_2$. Obviously, a solution is obtained by taking for $\varepsilon_2$ the proper vector corresponding to $\lambda = 1$; then $A\varepsilon_2 = \varepsilon_2 \in E_2$. In the basis $\varepsilon_1, \varepsilon_2$ and an arbitrary third basis vector $\varepsilon_3$, the matrix has a triangular form. We can give it a particularly simple form by taking for $\varepsilon_3$ a vector such that
\[
A\varepsilon_3 = \varepsilon_2 + \varepsilon_3.
\]
(This vector exists, $\lambda = 1$ being a double proper value of proper vector $\varepsilon_2$, and the rank of the matrix $A - I$ being $3 - 1 = 2$ if $a \neq 0$.) Thus $\varepsilon_3 = [\ldots]$, and the matrix $A$ takes the form
\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.
\]
If $a = 0$, then to $\lambda = 1$ there corresponds a two-dimensional space of independent proper vectors. Let us take $[\ldots]$; the matrix $A$ in the basis $\varepsilon_1, \varepsilon_2, \varepsilon_3$ then takes the diagonal form $\operatorname{diag}(2, 1, 1)$.
3. The differential system $dx/dt = Ax$ has for general solution:
(a) if $a \neq 0$:
\[
x = \alpha\,e^{2t}\varepsilon_1 + (\beta + \gamma t)e^{t}\varepsilon_2 + \gamma\,e^{t}\varepsilon_3,
\]
with (condition at $t = 0$) $\alpha\varepsilon_1 + \beta\varepsilon_2 + \gamma\varepsilon_3 = x_0$, from which $\alpha, \beta, \gamma$ are determined;
(b) if $a = 0$:
\[
x = \alpha\,e^{2t}\varepsilon_1 + e^{t}(\beta\varepsilon_2 + \gamma\varepsilon_3);
\]
one still has $\alpha\varepsilon_1 + \beta\varepsilon_2 + \gamma\varepsilon_3 = x_0$, where, in view of the components of the vectors $\varepsilon_1, \varepsilon_2, \varepsilon_3$, one finds $[\ldots]$.
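The reduction used in problems 18 and 19 (a generalized vector $\varepsilon_1$ with $(A - I)\varepsilon_1 = \varepsilon_2$, and the $t\,e^{t}$ term it produces) can be sketched numerically. This is an addition to the text, on a stand-in defective matrix, since the originals are lost:

```python
import numpy as np

A = np.array([[1.0, 1, 0],
              [0, 1, 0],
              [0, 0, 2]])                     # double value 1, one proper direction
eps2 = np.array([1.0, 0, 0])                  # proper vector for 1
eps1 = np.linalg.lstsq(A - np.eye(3), eps2, rcond=None)[0]   # generalized vector
eps3 = np.array([0.0, 0, 1])                  # proper vector for 2
assert np.allclose(A @ eps1, eps1 + eps2)     # A eps1 = eps1 + eps2

# Solution of dX/dt = AX built as in the text:
a, b, c, t = 1.0, -2.0, 0.5, 0.4
X  = a*np.exp(t)*(eps1 + t*eps2) + b*np.exp(t)*eps2 + c*np.exp(2*t)*eps3
dX = a*np.exp(t)*(eps1 + (t+1)*eps2) + b*np.exp(t)*eps2 + 2*c*np.exp(2*t)*eps3
assert np.allclose(dX, A @ X)
```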
20. Natural modes of vibration

Three equal masses $m_1 = m_2 = m_3 = m$ are bound together, and to a fixed support, by springs having the same constant $k$. (Fig. 3: the three masses in a chain of springs.) The displacements $y_1, y_2, y_3$ of the masses from their equilibrium positions are, in the absence of external forces, governed by the system of differential equations
\[
m\frac{d^2y_1}{dt^2} = -k(2y_1 - y_2),\qquad
m\frac{d^2y_2}{dt^2} = -k(2y_2 - y_1 - y_3),\qquad
m\frac{d^2y_3}{dt^2} = -k(y_3 - y_2).
\]
Obtain within 0.1 the numbers $\lambda = m\omega^2/k$ such that the system admits a solution of the following form (natural modes of vibration):
\[
y_i = x_i \sin(\omega t + \alpha),\qquad i = 1, 2, 3,
\]
where $x_1, x_2, x_3$ and $\alpha$ are constants, as well as $\omega$. (First find the smallest one, corresponding to the fundamental mode of vibration.)

Solution

By inserting in the differential system the expressions given for $y_i$, one sees that the vector $X$ with components $x_1, x_2, x_3$ must satisfy the linear and homogeneous system $AX = \lambda X$, where $A$ is the matrix
\[
A = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix}
\]
and where we have set $\lambda = m\omega^2/k$. If $X$ is not to be the null vector, it is necessary that $\lambda$ be a proper value of the matrix $A$. The fundamental mode of vibration of the system, $\omega_1$, corresponds to the smallest proper value of $A$; it is the inverse of the greatest proper value of $A^{-1}$, and we shall obtain it by successive approximations. We have
\[
A^{-1} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 3 \end{pmatrix}.
\]
Let us calculate by iteration the proper vector corresponding to the greatest proper value of $A^{-1}$ (and this proper value), by taking as a first approximation the vector $X_1 = (1, 1, 1)$. We have
\[
Y_1 = A^{-1}X_1 = (3, 5, 6).
\]
For $X_2$ we take the vector proportional to $Y_1$, $X_2 = (0.50, 0.83, 1)$, and we find
\[
Y_2 = A^{-1}X_2 = (2.33, 4.16, 5.16),\qquad X_3 = (0.45, 0.80, 1),
\]
\[
Y_3 = A^{-1}X_3 = (2.25, 4.05, 5.05).
\]
The greatest proper value of $A^{-1}$ is, within 0.1, found to be 5.0; the corresponding proper vector is $(0.45, 0.80, 1)$, and the smallest proper value of $A$ is $\lambda_1 \approx 0.2$.
To calculate the other proper values and proper vectors, we use the fact that, since $A$ (and hence also $A^{-1}$) is symmetric, the proper vectors still to be obtained are orthogonal to the vector already found $(0.45, 0.80, 1)$; thus their components satisfy the homogeneous equation
\[
0.45x_1 + 0.80x_2 + x_3 = 0.
\]
Inserting the value of $x_3$ obtained from this equation in the first two of the linear equations satisfied by the proper vectors of $A^{-1}$,
\[
x_1 + x_2 + x_3 = \mu x_1,\qquad x_1 + 2x_2 + 2x_3 = \mu x_2 \qquad (\mu = 1/\lambda),
\]
we find
\[
0.55x_1 + 0.20x_2 = \mu x_1,\qquad 0.10x_1 + 0.40x_2 = \mu x_2.
\]
The values of $\mu$ are the proper values of the two-dimensional matrix
\[
D = \begin{pmatrix} 0.55 & 0.20 \\ 0.10 & 0.40 \end{pmatrix}.
\]
We find $\mu_2 \approx 0.6$ and $\mu_3 \approx 0.3$. Therefore, the values, within 0.1, of the numbers $\lambda = m\omega^2/k$ are
\[
\lambda_1 \approx 0.2,\qquad \lambda_2 = 1/\mu_2 \approx 1.6,\qquad \lambda_3 = 1/\mu_3 \approx 3.2.
\]
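The iteration of problem 20 (the power method) is easy to run in numpy; this sketch is an addition to the text, using the matrices reconstructed above:

```python
import numpy as np

A = np.array([[2.0, -1, 0],
              [-1, 2, -1],
              [0, -1, 1]])
Ainv = np.linalg.inv(A)            # equals [[1,1,1],[1,2,2],[1,2,3]]

X = np.ones(3)
for _ in range(20):                # iterate toward the dominant proper vector
    Y = Ainv @ X
    mu = Y[-1]                     # normalize by the last component
    X = Y / mu
print("greatest proper value of A^-1:", round(mu, 2))       # ~5.05
print("smallest lambda = m w^2 / k  :", round(1 / mu, 2))   # ~0.20
print("all proper values of A       :", np.round(np.linalg.eigvalsh(A), 3))
# -> approximately 0.198, 1.555, 3.247, i.e. 0.2, 1.6, 3.2 within 0.1
```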
21. Functions of a matrix

1. Let $A$ be a three-dimensional square matrix with real elements. Assume that $A$ possesses three distinct real proper values, and write the characteristic equation of $A$ in the form
\[
P(\lambda) = \lambda^3 + a_2\lambda^2 + a_1\lambda + a_0 = 0 \qquad (a_0, a_1, a_2 \text{ real numbers}).
\]
Show that $A$ can be brought to the form $A = SDS^{-1}$, where $D$ is a diagonal matrix and $S$ a regular one. Deduce from this that $P(A) = A^3 + a_2A^2 + a_1A + a_0E = 0$, where $E$ is the three-dimensional unit matrix.
2. From here on, we assume that the proper values of $A$ are $\lambda_1 = \pi/2$, $\lambda_2 = 0$, and $\lambda_3 = 2\pi$. Obtain $P$ and use it to express $A^3$, $A^4$ in the form of second-degree trinomials in $A$. Setting $A^n = a_nA^2 + b_nA + c_nE$, show by recurrence that this holds for every positive or zero integer $n$, and deduce from this the expression for $A^n$ in the form of a second-degree trinomial in $A$.
3. Consider the polynomials
\[
S_n(\lambda) = \lambda - \frac{\lambda^3}{3!} + \cdots + (-1)^n\frac{\lambda^{2n+1}}{(2n+1)!},\qquad
T_n(\lambda) = 1 - \frac{\lambda^2}{2!} + \cdots + (-1)^n\frac{\lambda^{2n}}{(2n)!}.
\]
Express $S_n(A)$ and $T_n(A)$ in the form of second-degree trinomials in $A$. Let the power series of $\cos t$ and $\sin t$ for $t = \pi/2$ and $t = 2\pi$ appear in the coefficients of $A^2$, $A$ and $E$, and show that $\lim S_n(A)$ and $\lim T_n(A)$ exist. Setting $\sin A = \lim S_n(A)$ and $\cos A = \lim T_n(A)$, show that $(\cos A)^2 + (\sin A)^2 = E$.
4. Writing $A$ in the form $SDS^{-1}$ (see 1), express the polynomials $S_n(A)$, $T_n(A)$ introduced in 3 in terms of $S$ and $D$. Deduce from this expressions for $\sin A$ and $\cos A$. What information do we then obtain about the proper values of $\sin A$ and $\cos A$?
5. If $A = [\ldots]$, verify that the proper values of $A$ are those assumed in 2. Calculate $\sin A$ and $\cos A$ using the expressions obtained in 3. Obtain the proper values of $\sin A$ and $\cos A$: (a) directly, (b) from the results of 4.
6. Let $B(x)$ be a matrix of dimension three with proper values $x\lambda_1$, $x\lambda_2$, $x\lambda_3$. Define, as in 3, the functions $\sin B(x)$, $\cos B(x)$. Adapting to $B(x)$ the calculations made for $A$ in 2 and 3, express $\sin B(x)$ and $\cos B(x)$ in the form of second-degree trinomials in $B(x)$. Deduce from them expressions for $\sin B(2)$ and $\cos B(2)$. Noting that one can take $B(2) = 2A$, obtain $\sin 2A$ and $\cos 2A$. Show that $\sin 2A = 2\sin A\cos A$ and $\cos 2A = 2(\cos A)^2 - E$. Consider the expressions $\sin A\cos B(2) + \cos A\sin B(2)$ and $\sin B(2)\cos A + \cos B(2)\sin A$, where the matrix $B(2)$ is not necessarily the matrix $2A$. Find the condition for these two expressions to be equal.
(T.M.P., Rennes, 1959)

Solution

1. See text.
2. $P(A) = 0$; hence $A^3 = -a_2A^2 - a_1A - a_0E$, and $A^4$ is obtained by multiplying by $A$ and substituting again. Let us assume that $A^n = a_nA^2 + b_nA + c_nE$. Using the expression for $A^3$, we verify that $A^{n+1}$ is given by an analogous formula (replace $n$ by $n+1$); hence the required result is obtained by recurrence.
3. From this we deduce that $S_n(A)$ and $T_n(A)$ are second-degree trinomials in $A$ whose coefficients are partial sums of series; we recognize the convergent series of $\sin t$ and $\cos t$ at $t = \pi/2$ and $t = 2\pi$. Hence $\sin A = \lim S_n(A)$ exists; in a similar manner, we obtain $\cos A = \lim T_n(A)$, and verify by direct calculation that $(\cos A)^2 + (\sin A)^2 = E$.
4. See text. If $A = SDS^{-1}$, then $A^n = SD^nS^{-1}$; hence, for every polynomial $P_n$ and, in the limit, for every analytic function, $P_n(A) = SP_n(D)S^{-1}$, and in particular $\sin A = S\sin D\,S^{-1}$, $\cos A = S\cos D\,S^{-1}$. The proper values of $\sin A$ are $\sin\lambda_1 = 1$ and $\sin\lambda_2 = \sin\lambda_3 = 0$; those of $\cos A$ are $\cos\lambda_1 = 0$ and $\cos\lambda_2 = \cos\lambda_3 = 1$.
5. $\sin A = [\ldots]$, $\cos A = [\ldots]$. The proper values, that is, the roots of $\det(\sin A - \lambda I) = 0$, are $\lambda = 1$ and $\lambda = 0$ (double), in agreement with the results of 4. A similar verification holds for $\cos A$.
6. The characteristic equation satisfied by $B(x)$ is obtained from the one satisfied by $A$ by replacing $\lambda$ by $\lambda/x$. It follows that the proper values of $\sin B(2)$ are $\sin 2\lambda_i = 0$, and so $\sin B(2) = 0$, while $\cos B(2)$ has proper values $-1, 1, 1$. In particular, if we take $B(2) = 2A$ (a matrix of proper values $\pi$, $0$, $4\pi$), we obtain $\sin 2A = 0$ and $\cos 2A = 2(\cos A)^2 - E$. We verify, by taking into account the expressions of 3, that $\sin 2A = 2\sin A\cos A$ and $\cos 2A = 2(\cos A)^2 - E$. (This was to be expected, since these identities are equivalent to identities between series which are still valid modulo the equation $P(A) = 0$.) A sufficient condition for the given expressions to coincide is that the matrices $A$ and $B(2)$ commute; if they do, two arbitrary functions $f(A)$ and $g(B)$ of these matrices commute. Since the matrices $A$ and $B(2)$ can be diagonalized, they commute if and only if they can be simultaneously diagonalized, i.e., if their proper directions are the same. This condition is also necessary; indeed, we have seen that $\sin B(2)$ is the null matrix, hence we must have $\sin A\cos B(2) = \cos B(2)\sin A$: $\sin A$ and $\cos B(2)$ must have the same proper vectors, and hence also $A$ and $B(2)$.

22. Necessary and sufficient condition for a matrix to have a complete system of proper vectors

1. Let $A$ be a square matrix of dimension $n$. Show that there exists a unique polynomial $Q(x)$ of minimum degree, such that the coefficient of the term of highest degree is 1 and such that $Q(A) = 0$.
2. Show that a necessary and sufficient condition for $A$ to have a complete system of proper vectors is that the polynomial $Q(x)$ possess only simple roots.

Solution

1. According to the Hamilton-Cayley theorem, there exists a polynomial of degree $n$ (the characteristic polynomial) such that $P(A) = 0$. Let $Q(x)$ be a polynomial of minimum degree with leading coefficient 1 and such that $Q(A) = 0$; this polynomial is unique. For if a second one, $Q'(x)$, were to exist, the difference $Q(x) - Q'(x)$ would be of degree less than that of $Q$ and would satisfy $Q(A) - Q'(A) = 0$; after division by its leading coefficient, it would contradict the minimality of the degree of $Q$.
2. The proper values of $A$ are roots of $Q(x)$; indeed, the proper values of $Q(A)$ are the $Q(\lambda_i)$, and $Q(A) = 0$ implies $Q(\lambda_i) = 0$. Conversely, every root $a$ of $Q(x) = 0$ is a proper value of $A$, for $Q(a) = 0$ implies $Q(x) = (x - a)R(x)$, with degree $R(x) <$ degree $Q(x)$ and $R(A) \neq 0$. So $Q(A) = (A - aI)R(A)$, and $(A - aI)R(A)X = 0$ for every $X$. The vector $R(A)X$ (nonzero for some $X$, since $R(A) \neq 0$, $R$ being of degree less than that of $Q$) is a proper vector of $A$ for the proper value $a$.
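The matrix functions of problem 21 can be evaluated numerically through the diagonalization $f(A) = S f(D) S^{-1}$. This sketch is an addition to the text; since the book's matrix in part 5 is lost, $A$ is built here as a stand-in with the proper values $\pi/2$, $0$, $2\pi$ assumed above:

```python
import numpy as np

S = np.array([[1.0, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
D = np.diag([np.pi / 2, 0.0, 2 * np.pi])
A = S @ D @ np.linalg.inv(S)

def fun(f, A):
    """f(A) = V f(vals) V^{-1} for a diagonalizable matrix A."""
    vals, vecs = np.linalg.eig(A)
    return (vecs * f(vals)) @ np.linalg.inv(vecs)

sinA, cosA = fun(np.sin, A), fun(np.cos, A)
E = np.eye(3)
assert np.allclose(cosA @ cosA + sinA @ sinA, E)         # cos^2 + sin^2 = E
assert np.allclose(fun(np.sin, 2 * A), 2 * sinA @ cosA)  # sin 2A = 2 sinA cosA
assert np.allclose(fun(np.cos, 2 * A), 2 * cosA @ cosA - E)
assert np.allclose(fun(np.sin, 2 * A), 0)                # indeed sin 2A = 0
```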