
Elementary vector and tensor analysis
presented in a framework generalizable to higher-order applications in material modeling

R. M. Brannon
University of New Mexico, Albuquerque

Copyright is reserved. Individual copies may be made for personal use. No part of this document may be reproduced for profit. Contact author at rmbrann@sandia.gov

UNM REPORT
September 10, 2002 11:22 am

NOTE: When using Adobe's "acrobat reader" to view this document, the page numbers in acrobat will not coincide with the page numbers shown at the bottom of each page of this document.

Note to draft readers: The most useful textbooks are the ones with fantastic indexes. The report's index is rather new and still under construction. It would really help if you all could send me a note whenever you discover that an important entry is missing from this index. I'll be sure to add it. This work is a community effort. Let's try to make this document helpful to others.
ELEMENTARY VECTOR AND TENSOR ANALYSIS
presented in a framework generalizable to higher-
order applications in material modeling
Rebecca M. Brannon†
†University of New Mexico Adjunct professor
rmbrann@sandia.gov
Abstract
Elementary vector and tensor analysis concepts are reviewed using a notation
that proves useful for higher-order tensor analysis of anisotropic media.
Acknowledgments
To be added. Stay tuned...
Contents

Acknowledgments ........ ii
Preface ........ xiii
Introduction ........ 1
Terminology from functional analysis ........ 3
Matrix Analysis ........ 7
  Definition of a matrix ........ 7
  The matrix product ........ 8
  The transpose of a matrix ........ 9
  The inner product of two column matrices ........ 9
  The outer product of two column matrices ........ 10
  The trace of a square matrix ........ 10
  The Kronecker delta ........ 10
  The identity matrix ........ 10
  The 3D permutation symbol ........ 11
  The ε-δ (E-delta) identity ........ 11
  The ε-δ (E-delta) identity with multiple summed indices ........ 13
  Determinant of a square matrix ........ 14
  Principal sub-matrices and principal minors ........ 17
  Matrix invariants ........ 17
  Positive definite ........ 18
  The cofactor-determinate connection ........ 19
  Inverse ........ 19
  Eigenvalues and eigenvectors ........ 20
    Similarity transformations ........ 22
  Finding eigenvectors by using the adjugate ........ 23
Vector/tensor notation ........ 24
  "Ordinary" engineering vectors ........ 24
  Engineering "laboratory" base vectors ........ 24
  Other choices for the base vectors ........ 25
  Basis expansion of a vector ........ 25
  Summation convention — details ........ 26
    Don't forget to remember what repeated indices really mean ........ 27
    Indicial notation in derivatives ........ 29
    BEWARE: avoid implicit sums as independent variables ........ 29
  Reading index STRUCTURE, not index SYMBOLS ........ 30
  Aesthetic (courteous) indexing ........ 31
  Suspending the summation convention ........ 31
  Combining indicial equations ........ 32
  The index-changing property of the Kronecker delta ........ 33
  Summing the Kronecker delta itself ........ 34
  The "under-tilde" notation ........ 34
Simple vector operations and properties ........ 35
  Dot product between two vectors ........ 35
  Dot product between orthonormal base vectors ........ 36
  Finding the i-th component of a vector ........ 36
  Even and odd vector functions ........ 37
  Homogeneous functions ........ 37
  Vector orientation and sense ........ 38
  Simple scalar components ........ 38
  Cross product ........ 39
  Cross product between orthonormal base vectors ........ 39
  Triple scalar product ........ 41
  Triple scalar product between orthonormal RIGHT-HANDED base vectors ........ 41
  Axial vectors ........ 42
    Glide plane expressions ........ 43
Projections ........ 44
  Orthogonal linear projections ........ 44
  Rank-1 orthogonal projections ........ 46
  Rank-2 orthogonal projections ........ 47
  Basis interpretation of orthogonal projections ........ 47
  Rank-2 oblique linear projection ........ 48
  Rank-1 oblique linear projection ........ 49
  Complementary projectors ........ 49
  Normalized versions of the projectors ........ 50
  Expressing a vector as a linear combination of three arbitrary (not necessarily orthonormal) vectors ........ 51
  Generalized projections ........ 53
  Linear projections ........ 53
  Nonlinear projections ........ 53
  Self-adjoint projections ........ 54
  The projection theorem ........ 55
Tensors ........ 57
  Linear operators (transformations) ........ 58
  Dyads and dyadic multiplication ........ 61
  Simpler "no-symbol" dyadic notation ........ 62
  The matrix associated with a dyad ........ 63
  The sum of dyads ........ 64
  A sum of two or three dyads is NOT (generally) reducible ........ 64
  Scalar multiplication of a dyad ........ 64
  The sum of four or more dyads is reducible! ........ 65
  The dyad definition of a second-order tensor ........ 66
  Expansion of a second-order tensor in terms of basis dyads ........ 66
Tensor operations ........ 69
  Dotting a tensor from the right by a vector ........ 69
  The transpose of a tensor ........ 69
  Dotting a tensor from the left by a vector ........ 70
  Dotting a tensor by vectors from both sides ........ 71
  Extracting a particular tensor component ........ 71
  Dotting a tensor into a tensor (tensor composition) ........ 71
Tensor analysis primitives ........ 73
  Three kinds of vector and tensor notation ........ 73
  There exists a unique tensor for each linear function ........ 76
  Finding the tensor associated with a linear function ........ 76
  The identity tensor ........ 77
  Tensor associated with composition of two linear transformations ........ 78
  The power of heuristically consistent notation ........ 79
  The inverse of a tensor ........ 80
  The COFACTOR tensor ........ 80
  Cofactor tensor associated with a vector ........ 82
  Cramer's rule for the inverse ........ 82
  Inverse of a rank-1 modification ........ 83
  Derivative of a determinant ........ 83
Projectors in tensor notation ........ 84
  Linear orthogonal projectors expressed in terms of dyads ........ 84
  Finding a projection to a desired target space ........ 85
  Properties of complementary projection tensors ........ 85
  Self-adjoint (orthogonal) projectors ........ 86
  Generalized complementary projectors ........ 87
More Tensor primitives ........ 89
  Deviatoric tensors ........ 89
  Orthogonal (unitary) tensors ........ 89
  Tensor associated with the cross product ........ 91
  Physical application of axial vectors ........ 93
  Symmetric and skew-symmetric tensors ........ 94
  Positive definite tensors ........ 95
    Faster way to check for positive definiteness ........ 96
  Negative definite and semi-definite tensors ........ 97
  Isotropic and deviatoric tensors ........ 97
Tensor operations ........ 98
  Second-order tensor inner product ........ 98
  Fourth-order tensor inner product ........ 99
  Higher-order tensor inner product ........ 99
  The magnitude of a tensor or a vector ........ 101
  Useful inner product identities ........ 101
  Distinction between an Nth-order tensor and an Nth-rank tensor ........ 102
  Fourth-order oblique tensor projections ........ 102
Coordinate/basis transformations ........ 104
  Change of basis (coordinate transformation) ........ 104
Tensor invariance ........ 108
  What's the difference between a matrix and a tensor? ........ 108
  Example of a scalar "rule" that satisfies tensor invariance ........ 111
  Example of a scalar "rule" that violates tensor invariance ........ 111
  Example of a 3x3 matrix that does not correspond to a tensor ........ 112
  The inertia TENSOR ........ 114
Scalar invariants and spectral analysis ........ 116
  Invariants of vectors or tensors ........ 116
  Primitive invariants ........ 117
  Trace invariants ........ 118
  Characteristic invariants ........ 118
    Direct notation definitions of the characteristic invariants ........ 119
    The cofactor in the triple scalar product ........ 120
  Invariants of a sum of two tensors ........ 121
    CASE: invariants of the sum of a tensor plus a dyad ........ 121
  The Cayley-Hamilton theorem: ........ 122
    CASE: Expressing the inverse in terms of powers and invariants ........ 122
  Inverse of the sum of a tensor plus a dyad ........ 122
  Eigenvalue problems ........ 122
    Algebraic and geometric multiplicity of eigenvalues ........ 123
    Diagonalizable tensors ........ 125
  Eigenprojectors ........ 126
Geometrical entities ........ 128
  Equation of a plane ........ 128
  Equation of a line ........ 129
  Equation for a sphere ........ 130
  Equation for an ellipsoid ........ 130
    Example ........ 131
  Equation for a cylinder with an ellipse-cross-section ........ 132
  Equation for a right circular cylinder ........ 132
  Equation of a general quadric (including hyperboloid) ........ 132
  Generalization of the quadratic formula and "completing the square" ........ 133
Polar decomposition ........ 135
  The Q-R decomposition ........ 135
  The polar decomposition theorem: ........ 136
  The polar decomposition is a nonlinear projection operation ........ 138
  The *FAST* way to do a polar decomposition in two dimensions ........ 139
Material symmetry ........ 140
  Isotropic second-order tensors in 3D space ........ 140
  Isotropic second-order tensors in 2D space ........ 142
  Isotropic fourth-order tensors ........ 143
  Transverse isotropy ........ 144
Abstract vectors/tensor algebra ........ 146
  Definition of an abstract vector ........ 146
  Inner product spaces ........ 147
  Continuous functions are vectors! ........ 147
  Tensors are vectors! ........ 149
  Vector subspaces ........ 150
  Subspaces and the projection theorem ........ 151
  Abstract contraction and swap (exchange) operators ........ 151
    The contraction tensor ........ 155
    The swap tensor ........ 155
Vector/tensor calculus ........ 157
  ASIDE #1: "total" and "partial" derivative notation ........ 158
  ASIDE #2: Right and left gradient operations ........ 160
  The "nabla" or "del" gradient operator ........ 160
    Okay, if the above relation does not hold, does anything LIKE IT hold? ........ 163
  Derivatives in reduced dimension spaces ........ 164
    A more applied example ........ 166
  Series expansion of a nonlinear vector function ........ 167
Closing remarks ........ 169
REFERENCES ........ 171
INDEX ........ 175
Figures

Figure 5.1. Finding components via projections ........ 38
Figure 5.2. Cross product ........ 39
Figure 6.1. Vector decomposition ........ 45
Figure 6.2. (a) Rank-1 orthogonal projection, and (b) Rank-2 orthogonal projection ........ 47
Figure 6.3. Oblique projection ........ 48
Figure 6.4. Rank-1 oblique projection ........ 49
Figure 6.5. Projections of two vectors along an obliquely oriented line ........ 51
Figure 6.6. Three oblique projections ........ 52
Figure 6.7. Oblique projection ........ 55
Figure 17.1. Visualization of the polar decomposition ........ 137
Preface

Math and science journals often have extremely restrictive page limits, making it virtually impossible to present a coherent development of complicated concepts by working upward from basic concepts. Furthermore, scholarly journals are intended for the presentation of new results, so detailed explanations of known results are generally frowned upon (even if those results are not well-known or well-understood). Consequently, only those readers who are already well-versed in a subject have any hope of effectively reading the literature to further expand their knowledge. While this situation is good for experienced researchers and specialists in a particular field of study, it can be a frustrating handicap for less experienced people or people whose expertise lies elsewhere.

This report serves these individuals by presenting several known theorems or mathematical techniques that are useful for the analysis of material behavior. Most of these theorems are scattered willy-nilly throughout the literature. Several rarely appear in elementary textbooks. Most of the results in this report can be found in advanced textbooks on functional analysis, but these books tend to be overly generalized, so the application to specific problems is unclear. Advanced mathematics books also tend to use notation that might be unfamiliar to the typical research engineer.

This report presents derivations of theorems only where they help clarify concepts. The range of applicability of theorems is also omitted in certain situations. For example, describing the applicability range of a Taylor series expansion requires the use of complex variables, which is beyond the scope of this document. Likewise, unless otherwise stated, I will always implicitly presume that functions are "well-behaved" enough to permit whatever operations I perform. For example, the act of writing df/dx will implicitly tell the reader that I am assuming that f can be written as a function of x and (furthermore) this function is differentiable. In the sense that much of the usual (but distracting) mathematical provisos are missing, I consider this document to be a work of engineering despite the fact that it is concerned principally with mathematics.

While I hope this report will be useful to a broader audience of readers, my personal motivation is to establish a single bibliographic reference to which I can point from my more stilted and terse journal publications.

Rebecca Brannon, rmbrann@sandia.gov
Sandia National Laboratories
September 10, 2002 11:51 am.
1. Introduction

"Things should be described as simply as possible, but no simpler." — A. Einstein

The discussion of tensor calculus will be expanded at a later date (my schedule is really packed!). Please report errors to rmbrann@me.unm.edu

This report reviews tensor algebra (and a bit of tensor calculus) using a notation that proves very useful when extending these basic ideas to higher dimensions. Tensor notation unfortunately remains non-standardized, so it's important to at least scan this report to become familiar with our definitions and overall approach to the field if you wish to move along to our other (more sophisticated and contemporary) applications in materials modeling.

Many of the definitions and theorems provided in this report are not rigorous — instead they are presented in a more physical engineering manner. More careful expositions on these topics can be found in elementary textbooks on matrix, vector, and tensor analysis. One may need to look to a functional analysis text to find an equally developed discussion of projections. The reader is presumed to have been exposed to vector analysis and matrix analysis at the rudimentary level that is ordinarily covered in undergraduate calculus courses.
We present the information in this report in order to have a single reference in which all the concepts are presented together using a notation that is consistent with that used in more advanced follow-up work that we intend to publish separately. Some of our other work explains that many theorems in higher-dimensional realms have perfect analogs with the ordinary concepts from 3D. For example, this elementary report discusses how to obliquely project a vector onto a plane, and we demonstrate in later (more advanced) work that the act of solving viscoplasticity models by a return mapping algorithm is perfectly analogous to vector projection.

Throughout this report, we use the term "ordinary" to refer to the three-dimensional physical space in which everyday engineering problems occur. The term "abstract" will be used later when extending ordinary concepts to higher dimensional spaces, which is the principal goal of generalized tensor analysis. Except where otherwise stated, the basis {e~1, e~2, e~3} used for vectors and tensors in this report will be assumed regular (i.e., orthonormal and right-handed). Thus, all indicial formulas in this report use what Malvern [12] calls rectangular Cartesian components. Readers interested in irregular bases can find a discussion of curvilinear coordinates at http://www.me.unm.edu/~rmbrann/gobag.html (however, that report presumes that the reader is already familiar with the notation and basic identities that are covered in this report).
2. Terminology from functional analysis

"Change isn't painful, but resistance to change is." — anonymous(?)

Vector, tensor, and matrix analysis are subsets of a more general area of study called functional analysis. One purpose of this report is to specialize several overly-general results from functional analysis into forms that are the most convenient for "real world" physics applications. Functional analysis deals with operators and their properties. For our purposes, an operator may be regarded as a function f(x). If the argument of the function is a vector and if the result of the function is also a vector, then the function is usually called a transformation because it transforms one vector to become a new vector.

In this report, any non-underlined quantity is just an ordinary number (or, using more fancy jargon, scalar or field member). Quantities with single straight underlines (e.g., x or y) might represent scalars, vectors, tensors, or other abstract objects. We follow this convention throughout the text; namely, when discussing a concept that applies equally well to a tensor of any order (scalar, vector, second-order tensor), then we will use straight underlines.* When discussing "objects" of a particular order, then we will use "under-tildes", and the total number of under-tildes will equal the order of the object.

* At this point, the reader is not expected to already know what is meant by the term "tensor," much less the "order" of a tensor or the meaning of "inner product." For now, consider this section to apply to scalars and vectors. Just understand that the concepts reviewed in this section will also apply in more general tensor settings, once learned.
Some basic terminology from functional analysis is defined very loosely below. More mathematically correct definitions will be given later, or can be readily found in the literature [e.g., Refs 31, 26, 27, 28, 29, 11]. Throughout the following list, the analyst is presumed to be dealing with a set of "objects" (scalars, vectors, or perhaps something more exotic) for which scalar multiplication and "object" addition have well-understood meanings. The single dot "·" multiplication symbol represents ordinary multiplication when the arguments are just scalars. Otherwise, it represents the appropriate inner product depending on the arguments (e.g., it's the vector dot product if the arguments are vectors; it's the tensor "double dot" product — defined later — when the arguments are tensors).

• A transformation f is "linear" if f(αx + βy) = αf(x) + βf(y) for all x, y, α, and β.
• A transformation f is "self-adjoint" if y · f(x) = x · f(y).
• A transformation f is a projector if f(f(x)) = f(x). The term "idempotent" is also frequently used. A projector is a function that will keep on returning the same result if it is applied more than once (see the numerical sketch following this list).
• Any operator f must have a domain of admissible values of x for which f(x) is well-defined. Throughout this report, the domain of a function must be inferred by the reader so that the function "makes sense." For example, if f(x) = 1/x, then the reader is expected to infer that the domain is the set of nonzero x. Furthermore, throughout this report, all scalars, vectors and tensors are assumed to be real unless otherwise stated.
• The "codomain" of an operator is the set of all values y such that y = f(x). For example, if f(x) = x², then the codomain is the set of nonnegative numbers,* whereas the range is the set of reals. The term range space will often be used to refer to the range of a linear operator.
• A set S is said to be "closed" under some particular operation if application of that operation to a member of S always gives a result that is itself a member of S. For example, the set of all symmetric matrices† is closed under matrix addition because the sum of two symmetric matrices is itself a symmetric matrix. By contrast, the set of all orthogonal matrices is not closed under matrix addition because the sum of two orthogonal matrices is not generally itself an orthogonal matrix.
• The null space of an operator is the set of all x for which f(x) = 0.

* This follows because we have already stated that x is to be presumed real.
† Matrices are defined in the next section.
• For each input x, a well-defined operator f must give a unique output y = f(x). In other words, a single x must never correspond to two or more possible values of y. The operator f is called one-to-one if the reverse situation also holds. Namely, f is one-to-one if each y in the codomain of f is obtained by a unique x such that y = f(x). For example, the function f(x) = x² is not one-to-one because a single value of y can be obtained by two values of x (e.g., y = 4 can be obtained by x = 2 or x = −2).
• If a function f is one-to-one, then it is invertible. The inverse f⁻¹ is defined such that x = f⁻¹(y).
• A "linear combination" of two objects x and y is any object r that can be expressed in the form r = αx + βy for some choice of scalars α and β. A "linear combination" of three objects (x, y, and z) is any object r that can be expressed in the form r = αx + βy + γz. Of course, this definition makes sense only if you have an unambiguous understanding of what the objects represent. Moreover, you must have a definition for scalar multiplication and addition of the objects. If, for example, the "objects" are 1×2 matrices, then scalar multiplication αx of some matrix x = [x₁, x₂] would be defined [αx₁, αx₂] and the linear combination αx + βy would be a 1×2 matrix given by [αx₁ + βy₁, αx₂ + βy₂].
• A set of "objects" is linearly independent if no member of the set can be written as a linear combination of the other members of the set. If, for example, the "objects" are 1×2 matrices, then the three-member set {[1, 2], [3, 4], [5, 6]} is not linearly independent because the third matrix can be expressed as a linear combination of the first two matrices; namely, [5, 6] = (−1)[1, 2] + (2)[3, 4].
• The span of a collection of linearly independent vectors is the set of all vectors that can be written as a linear combination of the vectors in the collection. For example, the span of the two vectors {1, 1, 0} and {1, −1, 0} is the set of all vectors expressible in the form α₁{1, 1, 0} + α₂{1, −1, 0}. This set of vectors represents any vector {x₁, x₂, x₃} for which x₃ = 0.
• The dimension of a set or space equals the minimum number of "numbers" that you would have to specify in order to uniquely identify a member of that set minus the number of independent constraints that those numbers must satisfy. For example, the set of ordinary engineering vectors referenced to a commonly agreed upon "laboratory basis" in three dimensions is three dimensional because each vector has three components. However, the set of unit vectors n~ is two-dimensional because the three components of a unit vector must satisfy the one constraint,
n₁² + n₂² + n₃² = 1.

• If a set is closed under vector addition and scalar multiplication (i.e., if every linear combination of set members gives a result that is also in the set), then the set is called a linear manifold, or a linear space. Otherwise, the set is curvilinear. The set of all unit vectors is a curvilinear space because a linear combination of unit vectors does not result in a unit vector. Linear manifolds are like planes that pass through the origin, though they might be "hyperplanes," which is just a fancy word for a plane of more than just two dimensions. Linear spaces can also be one-dimensional. Any straight line that passes through the origin is a linear manifold.
• Zero must always be a member of a linear manifold, and this fact is often a great place to start when considering whether or not a set is a linear space. For example, we could assert that the set of unit vectors is not a linear space by simply noting that the zero vector is not a unit vector.
• A plane that does not pass through the origin must not be a linear space. We know this simply because such a plane does not contain the zero vector. This kind of plane is called an "affine" space. An "affine" space is a set that would become a linear space if the origin were to be moved to any single point in the set. For example, the point (0, b) lies on the straight line defined by the equation y = mx + b. If we move the origin from O = (0, 0) to a new location O* = (0, b), and introduce a change of variables x* = x − 0 and y* = y − b, then the equation for this same line described with respect to this new origin would become y* = mx*, which does describe a linear space. Stated differently, a set is affine if every member in that set is expressible in the form of a constant vector plus a vector that does belong to a linear space. Thus, learning about the properties of linear spaces is sufficient to learn most of what you need to know about affine spaces.
• Given an n-dimensional linear space, a subset of members of that space is a basis if every member of the space can be expressed as a linear combination of members of the subset. A basis always contains exactly as many members as the dimension of the space.
• A "binary" operation is simply a function or transformation that has two arguments. For example, f(x, y) = x²y is a binary operation.
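For readers who like to experiment, here is a minimal numpy sketch (illustrative only; the helper names are mine) that checks two items from the list above: a projector is idempotent, and the two vectors {1, 1, 0} and {1, −1, 0} span exactly the vectors with x₃ = 0.

```python
import numpy as np

# Orthogonal projection onto the direction a:  f(x) = (a.x / a.a) a
a = np.array([1.0, 2.0, 2.0])

def project(x):
    return (a @ x) / (a @ a) * a

x = np.array([3.0, -1.0, 4.0])
# A projector is idempotent: f(f(x)) = f(x)
assert np.allclose(project(project(x)), project(x))

# Span example: {1,1,0} and {1,-1,0} are linearly independent (rank 2),
# and any vector with x3 = 0 is a linear combination of them.
B = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0]])
assert np.linalg.matrix_rank(B) == 2

v = np.array([5.0, 3.0, 0.0])                 # has x3 = 0
coeffs, *_ = np.linalg.lstsq(B.T, v, rcond=None)
assert np.allclose(B.T @ coeffs, v)           # v lies in the span
```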
3. Matrix Analysis

"There are a thousand hacking at the branches of evil to one who is striking at the root." — Henry Thoreau

Tensor analysis is neither a subset nor a superset of matrix analysis — tensor analysis complements matrix analysis. For the purpose of this report, only the following concepts are required from matrix analysis:*

Definition of a matrix

A matrix is an ordered array of numbers that are typically arranged in the form of a "table" having M rows and N columns. If one of the dimensions (M or N) happens to equal 1, then the term "vector" is often used, although we will often use the term "array" in order to avoid confusion with vectors in the physical sense. A matrix is called "square" if M = N. We will usually typeset matrices in plain text with brackets such as [A].

For matrices of dimension N×1, we may also use braces, as in {v}; namely, if N = 3, then

$$\{v\} = \begin{Bmatrix} v_1 \\ v_2 \\ v_3 \end{Bmatrix} \tag{3.1}$$

For matrices of dimension 1×M, we use angled brackets <v>; thus, if M = 3,

$$\langle v \rangle = [v_1, v_2, v_3] \tag{3.2}$$

If attention must be called to the dimensions of a matrix, then they will be shown as subscripts, for example, $[A]_{M \times N}$. The number residing in the i-th row and j-th column of [A] will be denoted $A_{ij}$.

* Among the references listed in our bibliography, we recommend the following for additional reading, listed in order from easiest (therefore less rigorous) to most abstract (and therefore complete): Refs. 23, 9, Finish this list...
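If it helps to map this notation onto a computational tool, here is a small sketch (my own illustration, assuming numpy) of the {v}, <v>, and subscripted-dimension conventions:

```python
import numpy as np

# {v}: an N x 1 column matrix (here N = 3)
v_col = np.array([[1.0], [2.0], [3.0]])   # shape (3, 1)

# <v>: a 1 x M row matrix
v_row = np.array([[1.0, 2.0, 3.0]])       # shape (1, 3)

# [A]_{M x N}: here M = 2 rows and N = 3 columns
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])           # shape (2, 3)

# A_ij is the number in the i-th row and j-th column
# (numpy counts from 0, while the report counts from 1)
A_12 = A[0, 1]                            # the report's A_12 = 2.0
```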
In this report, vectors will be typeset in bold with one single "under-tilde" (for example, v~) and the associated three components of the vector with respect to some implicitly understood basis will be denoted {v~} or <v~>, depending on whether those components are collected into a column or row matrix, respectively. Similarly, second-order tensors (to be defined later) will be denoted in bold with two under-tildes (for example T~~), and we will find that tensors are often described in terms of an associated 3×3 matrix, which we will denote by placing square brackets around the tensor symbol (for example, [T~~]). As was the case with vectors, the matrix of components is presumed referenced to some mutually understood underlying basis — changing the basis will not change the tensor T~~, but it will change its associated matrix [T~~]. These comments will make more sense later.

The matrix product

The matrix product of $[A]_{M \times R}$ times $[B]_{R \times N}$ is a new matrix $[C]_{M \times N}$ written

$$[C] = [A][B] \tag{3.3}$$

Explicitly showing the dimensions,

$$[C]_{M \times N} = [A]_{M \times R}\,[B]_{R \times N} \tag{3.4}$$

Note that the R dimension must be common to both matrices on the right-hand side, and this common dimension must reside at the "inside" or "abutting" position (the trailing dimension of [A] must equal the leading dimension of [B]). The matrix product operation is defined

$$C_{ij} = \sum_{k=1}^{R} A_{ik} B_{kj}, \quad \text{where } i \text{ takes values from 1 to } M, \text{ and } j \text{ takes values from 1 to } N. \tag{3.5}$$

The summation over k ranges from 1 to the common dimension, R.

As a special case, suppose that [F] is a square matrix of dimension N×N. Suppose that {v} is an array (i.e., column matrix) of dimension N×1. Then

$$\{u\} = [F]\{v\} \tag{3.6}$$

must be an array of dimension N×1 with components given by

$$u_i = \sum_{k=1}^{N} F_{ik} v_k, \quad \text{where } i \text{ takes values from 1 to } N \tag{3.7}$$
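Eq. (3.5) translates directly into nested loops. A short sketch (assuming numpy; the variable names are mine) comparing the component formula against the built-in matrix product:

```python
import numpy as np

M, R, N = 2, 4, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((M, R))
B = rng.standard_normal((R, N))

# C_ij = sum over k from 1 to R of A_ik B_kj  (Eq. 3.5)
C = np.zeros((M, N))
for i in range(M):
    for j in range(N):
        for k in range(R):
            C[i, j] += A[i, k] * B[k, j]

assert np.allclose(C, A @ B)   # agrees with the built-in product
```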
The transpose of a matrix

The transpose of a matrix $[A]_{M \times N}$ is a new matrix $[B]_{N \times M}$ (note the reversed dimensions). The components of the transpose are

$$B_{ij} = A_{ji}, \quad \text{where } i \text{ takes values from 1 to } N, \text{ and } j \text{ takes values from 1 to } M. \tag{3.8}$$

The transpose of [A] is written as $[A]^T$, and the notation $A^T_{ij}$ means the ij component of $[A]^T$. Thus, the above equation may be written

$$A^T_{ij} = A_{ji}, \quad \text{where } i \text{ takes values from 1 to } N, \text{ and } j \text{ takes values from 1 to } M. \tag{3.9}$$

The dimensions of $[A]$ and $[A]^T$ are reverses of each other. Thus, for example, if {v} is an N×1 matrix, then $\{v\}^T$ is a 1×N matrix. In other words,

$$\{v\}^T = \langle v \rangle \quad \text{and} \quad \langle v \rangle^T = \{v\} \tag{3.10}$$

The transpose of a product is the reverse product of the transposes. For example,

$$([A][B])^T = [B]^T [A]^T, \quad \text{and} \quad (\langle v \rangle [A])^T = [A]^T \langle v \rangle^T = [A]^T \{v\} \tag{3.11}$$

The inner product of two column matrices

The inner product of two column matrices, {v} and {w}, each having the same dimension N×1, is defined

$$\{v\}^T \{w\}, \quad \text{or, using the angled-bracket notation,} \quad \langle v \rangle \{w\} \tag{3.12}$$

Applying the definition of matrix multiplication, the result is a 1×1 matrix (which is just a single number) given by

$$\sum_{k=1}^{N} v_k w_k \tag{3.13}$$

If {v} and {w} contain components of two vectors v~ and w~, then the inner product gives the same result as the vector "dot" product v~ • w~, defined later.
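The transpose and inner-product rules above are easy to spot-check numerically; a brief sketch (again assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
v = rng.standard_normal((3, 1))   # {v}: N x 1 column matrix
w = rng.standard_normal((3, 1))   # {w}: N x 1 column matrix

# Dimensions reverse under transposition: (2,3) -> (3,2)
assert A.T.shape == (3, 2)

# Transpose of a product is the reverse product of transposes (Eq. 3.11)
assert np.allclose((A @ B).T, B.T @ A.T)

# Inner product {v}^T {w}: a 1x1 matrix equal to sum_k v_k w_k (Eq. 3.13)
inner = v.T @ w
assert inner.shape == (1, 1)
assert np.isclose(inner[0, 0], np.sum(v * w))
```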
The outer product of two column matrices

The outer product of two column matrices, $\{a\}_{M \times 1}$ and $\{b\}_{N \times 1}$, not necessarily of the same dimension, is defined

$$\{a\}\{b\}^T, \quad \text{or, using the angled-bracket notation,} \quad \{a\}\langle b \rangle \tag{3.14}$$

For this case, the value R of the "adjacent" dimension in Eq. (3.5) is just 1, so the summation ranges from 1 to 1 (which means that it is just a solitary term). The result of the outer product is an M×N matrix, whose ij component is given by $a_i b_j$. If {a} and {b} contain components of two vectors a~ and b~, then the outer product gives the matrix corresponding to the "dyadic" product a~b~ (also often denoted a~⊗b~), to be discussed in gory detail later.

The trace of a square matrix

A matrix $[A]_{N \times N}$ is called "square" because it has as many rows as it has columns. The trace of a square matrix is simply the sum of the diagonal components:

$$\mathrm{tr}[A] = \sum_{k=1}^{N} A_{kk} \tag{3.15}$$

The trace operation satisfies the following properties:

$$\mathrm{tr}([A]^T) = \mathrm{tr}[A] \tag{3.16}$$

$$\mathrm{tr}([A][B]) = \mathrm{tr}([B][A]) \quad \text{(cyclic property)} \tag{3.17}$$

The Kronecker delta

The so-called Kronecker delta is a symbol $\delta_{ij}$ that is defined for different values of the subscripts i and j. Specifically,

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \tag{3.18}$$

The identity matrix

The identity matrix, denoted [I], has all zero components except 1 on the diagonal. For example, the 3×3 identity is

$$[I] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{3.19}$$

The ij component of the identity is given by $\delta_{ij}$. Note that, for any array {v},

$$[I]\{v\} = \{v\} \tag{3.20}$$
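The outer product, trace, and identity properties can be confirmed the same way (an illustrative sketch; names mine):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal((2, 1))   # {a}: M x 1
b = rng.standard_normal((3, 1))   # {b}: N x 1

# Outer product {a}<b> is M x N with ij component a_i b_j (Eq. 3.14)
outer = a @ b.T
assert outer.shape == (2, 3)
assert np.isclose(outer[1, 2], a[1, 0] * b[2, 0])

# Trace properties (Eqs. 3.16 and 3.17)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
assert np.isclose(np.trace(A.T), np.trace(A))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))   # cyclic property

# The identity matrix leaves any array unchanged (Eq. 3.20)
v = rng.standard_normal((3, 1))
assert np.allclose(np.eye(3) @ v, v)
```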
The 3D permutation symbol

The 3D permutation symbol (also known as the alternating symbol or the Levi-Civita density) is defined

$$\varepsilon_{ijk} = \begin{cases} +1 & \text{if } ijk = 123,\ 231,\ \text{or } 312 \\ -1 & \text{if } ijk = 321,\ 132,\ \text{or } 213 \\ 0 & \text{otherwise} \end{cases} \tag{3.21}$$

Note that the indices may be permuted cyclically without changing the value of the result. Furthermore, inverting any two indices will change the sign of the value. Thus, the permutation symbol has the following properties:

$$\varepsilon_{ijk} = \varepsilon_{jki} = \varepsilon_{kij} = -\varepsilon_{jik} = -\varepsilon_{ikj} = -\varepsilon_{kji} \tag{3.22}$$

The term "3D" is used to indicate that there are three subscripts on each $\varepsilon_{ijk}$, each of which takes on values from 1 to 3.*

The ε-δ (E-delta) identity

If the alternating symbol is multiplied by another alternating symbol with exactly one index being summed, a very famous and extraordinarily useful result, called the ε-δ identity, applies. Namely,

$$\sum_{n=1}^{3} \varepsilon_{ijn}\varepsilon_{kln} = \delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk} \tag{3.23}$$

Here, we have highlighted the index "n" to emphasize that it is summed, while the other indices (i, j, k, and l) are "free" indices taking on values from 1 to 3. Later on, we are going to introduce the "summation convention," which states that expressions having one index appearing exactly twice in a term should be understood to be summed from 1 to 3 over that index. Index symbols that appear exactly once in one term are called "free indices," taking values from 1 to 3, and they must appear exactly once in all of the other terms. Using this convention, the above equation can be written as

$$\varepsilon_{ijn}\varepsilon_{kln} = \delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk} \tag{3.24}$$

* Though not needed for our purposes, the 2D permutation symbol $\varepsilon_{ij}$ is defined to equal zero if i = j, +1 if ij = 12, and −1 if ij = 21. The 4D permutation symbol $\varepsilon_{ijkl}$ is defined to equal zero if any of the four indices are equal; it is +1 if ijkl is an even permutation of 1234 and −1 if ijkl is an odd permutation. A permutation is simply a rearrangement. The permutation ijkl is even if rearranging it back to 1234 can be accomplished by an even number of moves that exchange two elements at a time. A cyclic permutation of an n-D permutation symbol will change sign if n is even, but remain unchanged if n is odd. Thus, for our 3D permutation symbol, cyclic permutations don't change sign, whereas cyclic permutations of the 4D permutation symbol will change sign.
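The ε-δ identity is easy to verify by brute force. A sketch (assuming numpy; the construction of eps follows Eq. 3.21) that checks Eq. (3.23) for every choice of the free indices:

```python
import numpy as np

# Build the 3D permutation symbol from Eq. (3.21); indices shifted to 0-based
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:   # 123, 231, 312 -> +1
    eps[i, j, k] = 1.0
for i, j, k in [(2, 1, 0), (0, 2, 1), (1, 0, 2)]:   # 321, 132, 213 -> -1
    eps[i, j, k] = -1.0

delta = np.eye(3)

# Check Eq. (3.23) for all 81 combinations of the free indices i, j, k, l
for i in range(3):
    for j in range(3):
        for k in range(3):
            for l in range(3):
                lhs = sum(eps[i, j, n] * eps[k, l, n] for n in range(3))
                rhs = delta[i, k] * delta[j, l] - delta[i, l] * delta[j, k]
                assert np.isclose(lhs, rhs)
```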
Because of the cyclic properties of the permutation symbol, the ε-δ identity applies whenever any index on the first ε matches any index on the second ε. For example, the above equation would apply to the expression $\varepsilon_{nij}\varepsilon_{kln}$ because $\varepsilon_{nij} = \varepsilon_{ijn}$. The negative of the ε-δ identity would also apply to the expression $\varepsilon_{inj}\varepsilon_{kln}$ because $\varepsilon_{inj} = -\varepsilon_{ijn}$. Of course, if a negative permutation is also required to place the summation index at the end of the second ε, then the positive of the ε-δ identity would again apply.

To make an expression fit the index structure of Eq. (3.23), most people laboriously apply the cyclic property to each alternating symbol until the summed index is located at the trailing side on both of them. Losing track of whether or not these manipulations require changing the final sign of the right hand side of the ε-δ identity is one of the most common and avoidable careless mistakes made when people use this identity. Even once the summation index has been properly positioned at the trailing end of each alternating symbol, most people then apply a slow (and again error-prone) process of figuring out where the free indices go. Typically people apply a "left-right/outside-inside" rule. By this, we mean that the free indices on the left sides of the two alternating symbols are the indices that go on the first δ, then the right free indices go on the second δ, then the outer free indices go on the third δ, and (finally) the inner free indices go on the last δ. The good news is... you don't have to do it this way! By thinking about the ε-δ identity in a completely different way, you can avoid both the initial rearrangement of the indices on the alternating symbols and the slow left-right-out-in placement of the indices. Let's suppose you want to apply the ε-δ identity to the expression $\varepsilon_{imk}\varepsilon_{pin}$. First write a "skeleton" of the identity as follows

$$\varepsilon_{imk}\varepsilon_{pin} = \delta_{??}\delta_{??} - \delta_{??}\delta_{??} \tag{3.25}$$

Our goal is to find a rapid and error-minimizing way to fill in the question marks with the correct index symbols. Once you have written the skeleton, look at the left-hand side to identify which index is summed. In this case, it is the index i. Next say out loud the four free indices in an order defined by "cyclically moving forward from the summed index" on each alternating symbol. Each alternating symbol has two free indices. To call out their names by moving cyclically forward, you simply say the name of the two indices to the right of the summed index, wrapping back around to the beginning if necessary. For example, the two indices cyclically forward from "p" in the sequence "pqr" are "qr"; the two indices cyclically forward from "q" are "rp"; the two indices cyclically forward from "r" are "pq". For the first alternating symbol in our skeleton of Eq. (3.25), the two indices cyclically forward from the summed index i are "mk" whereas the two indices cyclically forward from i in the second alternating symbol are "np". You can identify these pairs quickly without ever
September 10, 2002 2:37 pm
Matrix Analysis D R A F TR e b e c c a B r a n
Copyright is reserved. Ind
You can identify these pairs quickly without ever having to rearrange anything, and you can (in your head) group the pairs together to obtain a sequence of four free indices, “mknp”. The final step is to write these four indices onto the skeleton. If the indices are ordered 1234, then you should write the first two indices (first and second) on the skeleton like this:

δ_1? δ_2? − δ_1? δ_2?   (3.26)

You write the last pair (third and fourth) in order (34) on the first term and in reverse order (43) on the last term:

δ_13 δ_24 − δ_14 δ_23   (3.27)

Thus, for example, to place the free indices “mknp” onto the Kronecker deltas, you would first take care of the “mk” by writing

δ_m? δ_k? − δ_m? δ_k?   (3.28)

Then you just finish off with the last two “np” free indices by writing them in that order on the first term and in reverse order on the second term to obtain the final result:

ε_imk ε_pin = δ_mn δ_kp − δ_mp δ_kn.   (3.29)

This may seem a bit strange at first (especially if you are already stuck in the left-right-out-in mind set), but this method is far quicker and less error-prone. Give it a try until you become comfortable with it, and you probably won’t dream of going back to your old way.

The ε-δ (E-delta) identity with multiple summed indices

Recall that the ε-δ identity is given by

Σ_{n=1}^{3} ε_ijn ε_kln = δ_ik δ_jl − δ_il δ_jk.   (3.30)

What happens if we now consider the case of two alternating symbols multiplied side-by-side with two indices being summed? This question is equivalent to throwing a summation around the above equation in such a manner that we add up only those terms for which j = l. Then

Σ_{n=1}^{3} Σ_{j=1}^{3} ε_ijn ε_kjn = Σ_{j=1}^{3} (δ_ik δ_jj − δ_ij δ_jk)
  = δ_ik (δ_11 + δ_22 + δ_33) − (δ_i1 δ_1k + δ_i2 δ_2k + δ_i3 δ_3k)
  = 3δ_ik − δ_ik
  = 2δ_ik   (3.31)

Note that we simplified the first term by noting that δ_11 + δ_22 + δ_33 = 1 + 1 + 1 = 3. The second term was simplified by noting that δ_i1 δ_1k + δ_i2 δ_2k + δ_i3 δ_3k will be zero if i ≠ k, or it will equal 1 if i = k. Thus, it must be simply δ_ik.
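These index identities are easy to confirm by brute force. Below is a minimal NumPy sketch (not part of the original text; the array names eps and delta are illustrative) that builds the three-dimensional permutation symbol, checks the skeleton result of Eq. (3.29) for every combination of free indices, and checks the double-sum result of Eq. (3.31).

import itertools
import numpy as np

# Build the 3-D permutation symbol of Eq. (3.21): +1 for even permutations,
# -1 for odd permutations, 0 (from np.zeros) when any index repeats.
eps = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))

delta = np.eye(3)

# Eq. (3.29): eps_imk eps_pin = delta_mn delta_kp - delta_mp delta_kn,
# summing over the repeated index i only.
for m, k, n, p in itertools.product(range(3), repeat=4):
    lhs = sum(eps[i, m, k] * eps[p, i, n] for i in range(3))
    rhs = delta[m, n] * delta[k, p] - delta[m, p] * delta[k, n]
    assert np.isclose(lhs, rhs)

# Eq. (3.31): eps_ijn eps_kjn summed over j and n equals 2 delta_ik.
for i, k in itertools.product(range(3), repeat=2):
    s = sum(eps[i, j, n] * eps[k, j, n] for j in range(3) for n in range(3))
    assert np.isclose(s, 2 * delta[i, k])

print("Eqs. (3.29) and (3.31) verified for all index combinations.")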
Using similar logic, the ε-δ identity with all indices summed is equivalent to setting i = k in the above equation and summing over each instance, so that the result is six. To summarize using the summation conventions,

ε_ijn ε_kjn = 2δ_ik   (3.32)

ε_ijk ε_ijk = 6   (3.33)

Determinant of a square matrix

The simplest way to explain what is meant by a determinant is to define it recursively. In this section, we show how the determinant of a 3×3 matrix can be alternatively defined by using the three-dimensional permutation symbol of Eq. (3.21).

A 1×1 matrix is just a single number. The determinant of a 1×1 matrix is defined to equal its solitary component. Thus,

det[A_11] ≡ A_11   (3.34)

The determinant of a 2×2 matrix is defined by

det [A_11 A_12; A_21 A_22] ≡ A_11 A_22 − A_12 A_21   (3.35)

The determinant of a 3×3 matrix is defined by

det [A_11 A_12 A_13; A_21 A_22 A_23; A_31 A_32 A_33] ≡ (A_11 A_22 A_33 + A_12 A_23 A_31 + A_13 A_21 A_32) − (A_13 A_22 A_31 + A_11 A_23 A_32 + A_12 A_21 A_33)   (3.36)

Note that we have arranged this formula such that the first indices in each factor are 123. For the positive terms, the second indices are all the positive permutations of 123. Namely: 123, 231, and 312. For the negative terms, the second indices are all the negative permutations of 123. Namely: 321, 132, and 213. This relationship may be written compactly by using the permutation symbol ε_ijk from Eq. (3.21). Namely, if [A] is a 3×3 matrix, then
det[A] = Σ_{i=1}^{3} Σ_{j=1}^{3} Σ_{k=1}^{3} ε_ijk A_1i A_2j A_3k   (3.37)

This definition can be extended to square matrices of arbitrarily large dimension by using the n-dimensional permutation symbol (see footnote on page 11). Alternatively, for square matrices of arbitrarily large dimension, the determinant can be defined recursively as

det[A]_{N×N} = Σ_{j=1}^{N} A_ij A^C_ij   (no summation on index i)   (3.38)

where i is a free index taking any convenient value from 1 to N (any choice for i will give the same result). The quantity A^C_ij is called the “cofactor” of A_ij, and it is defined by

A^C_ij = (−1)^{i+j} det[M_ij]_{(N−1)×(N−1)}   (3.39)

Here [M_ij] is the submatrix obtained by striking out the ith row and jth column of [A]. The determinant of [M_ij] is called the “minor” associated with A_ij. By virtue of the factor (−1)^{i+j}, the cofactor A^C_ij is often called the “signed minor.” The formula in Eq. (3.38) is almost never used in numerical calculations because it requires too many multiplications,* but it frequently shows up in theoretical analyses.

* Specifically, for large values of the dimension N, the number of multiplications required to evaluate the determinant using Cramer’s rule (as Eq. 3.38 is sometimes called) approaches (e−1)N!, where e is the base of the natural logarithm. An ordinary personal computer would require a few million years to compute a 20×20 determinant using Cramer’s rule! Far more efficient decomposition methods [__] can be used to compute determinants of large matrices.

The index i in Eq. (3.39) may be chosen for convenience (usually a row with several zeros is chosen to minimize the number of sub-determinants that must be computed). The above definition is recursive because det[A]_{N×N} is defined in terms of smaller (N−1)×(N−1) determinants, which may in turn be expressed in terms of (N−2)×(N−2) determinants, and so on until the determinant is expressed in terms of only 1×1 determinants, for which the determinant is defined in Eq. (3.34). As an example, consider using Eq. (3.38) to compute the determinant of a 3×3 matrix. Choosing i = 1, Eq. (3.38) gives

det [A_11 A_12 A_13; A_21 A_22 A_23; A_31 A_32 A_33] = A_11 det[A_22 A_23; A_32 A_33] − A_12 det[A_21 A_23; A_31 A_33] + A_13 det[A_21 A_22; A_31 A_32]   (3.40)

Alternatively, choosing i = 2, Eq. (3.38) gives
det [A_11 A_12 A_13; A_21 A_22 A_23; A_31 A_32 A_33] = −A_21 det[A_12 A_13; A_32 A_33] + A_22 det[A_11 A_13; A_31 A_33] − A_23 det[A_11 A_12; A_31 A_32]   (3.41)

After using Eq. (3.35) to compute the 2×2 submatrices, both of the above expressions give the same final result as Eq. (3.36).

Some key properties of the determinant are listed below:

det([A]^T) = det[A]   (3.42)

det([A][B]) = (det[A])(det[B])   (3.43)

det(α[A]_{N×N}) = α^N det[A]   (3.44)

det([A]^{−1}) = 1/det[A]   (3.45)

If [B] is obtained by swapping two rows (or two columns) of [A], then det[B] = −det[A].   (3.46)

If any row of [A] can be written as a linear combination of the other rows, then det[A] = 0. A special case is that det[A] = 0 if any two rows of [A] are equal.   (3.47)

For 3×3 determinants, the last two properties allow us to generalize Eq. (3.37) to read

ε_pqr det[A] = Σ_{i=1}^{3} Σ_{j=1}^{3} Σ_{k=1}^{3} ε_ijk A_pi A_qj A_rk   (3.48)

or, using the summation convention in which repeated indices are understood to be summed,

ε_pqr det[A] = ε_ijk A_pi A_qj A_rk   (3.49)

This expression is frequently cited in continuum mechanics textbooks as the indicial definition of the determinant of a 3×3 matrix. Multiplying the above formula by ε_pqr, summing over p, q, and r, and using Eq. (3.33) reveals that

det[A] = (1/6) ε_pqr ε_ijk A_pi A_qj A_rk   (3.50)

Here, there are implied summations over the indices i, j, k, p, q, and r. If it were expanded out, the above expression would contain 729 terms, so it is obviously not used to actually compute the determinant. However, it is not at all uncommon for expressions like this to show up in analytical work, and it is therefore essential for the analyst to recognize that the right-hand side simplifies so compactly.
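The compact formulas (3.37) and (3.50) can be checked numerically. The following sketch (not from the original text; the example matrix is arbitrary) evaluates both index expressions with numpy.einsum and compares them to the library determinant.

import itertools
import numpy as np

# Three-dimensional permutation symbol, as in the earlier sketch.
eps = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# Eq. (3.37): det[A] = eps_ijk A_1i A_2j A_3k
det37 = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])

# Eq. (3.50): det[A] = (1/6) eps_pqr eps_ijk A_pi A_qj A_rk  (729 terms!)
det50 = np.einsum('pqr,ijk,pi,qj,rk->', eps, eps, A, A, A) / 6.0

print(det37, det50, np.linalg.det(A))  # all three agree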
Principal sub-matrices and principal minors

A so-called n×n principal submatrix of a square matrix [A]_{N×N} is any n×n submatrix (where n ≤ N) whose diagonal components are also diagonal components of the larger matrix. For example,

[A_11 A_13; A_31 A_33]   (3.51)

is a principal submatrix, whereas [A_12 A_13; A_22 A_23] is not a principal submatrix. For a 3×3 matrix, there are three 1×1 principal submatrices (identically equal to the diagonal components), three 2×2 principal submatrices, and only one 3×3 principal submatrix (equal to the matrix [A] itself). A sequence of 1×1, 2×2, …, N×N principal submatrices is nested if the 1×1 matrix is a submatrix of the 2×2 matrix, the 2×2 matrix is a submatrix of the next larger submatrix, and so forth.

A principal minor is the determinant of any principal submatrix. The term “nested minors” means the determinants of a set of nested submatrices.

Matrix invariants

The kth “characteristic” invariant, denoted I_k, of a matrix [A] is the sum of all possible k×k principal minors. For a 3×3 matrix, these three invariants are

I_1 = A_11 + A_22 + A_33   (3.52)

I_2 = det[A_11 A_12; A_21 A_22] + det[A_11 A_13; A_31 A_33] + det[A_22 A_23; A_32 A_33]   (3.53)

I_3 = det[A_11 A_12 A_13; A_21 A_22 A_23; A_31 A_32 A_33]   (3.54)

Warning: if the matrix is non-symmetric, the characteristic invariants are not a complete set of independent invariants. If all three characteristic invariants of a symmetric matrix are zero, then the matrix itself is zero. However, as discussed later, it is possible for all three characteristic invariants of a non-symmetric matrix to be zero without the matrix itself being zero.
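Since I_k is defined as the sum of all k×k principal minors, it can be computed directly from that definition. Here is a small sketch (not from the original text; characteristic_invariants is a hypothetical helper name) that enumerates the principal submatrices with itertools.

import itertools
import numpy as np

def characteristic_invariants(A):
    """I_k = sum of all k x k principal minors of the square matrix A."""
    N = A.shape[0]
    return [sum(np.linalg.det(A[np.ix_(rows, rows)])
                for rows in itertools.combinations(range(N), k))
            for k in range(1, N + 1)]

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
I1, I2, I3 = characteristic_invariants(A)
print(I1, I2, I3)                      # I1 is the trace, I3 the determinant
print(np.trace(A), np.linalg.det(A))   # consistency check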
Positive definite

A square matrix [B]_{N×N} is positive definite if and only if

{v}^T [B] {v} > 0 for all {v} ≠ {0}   (3.55)

In indicial notation, this requirement is

Σ_{i=1}^{N} Σ_{j=1}^{N} v_i B_ij v_j > 0   (3.56)

Written out explicitly for the special case of a 2×2 matrix,

B_11 v_1 v_1 + B_12 v_1 v_2 + B_21 v_2 v_1 + B_22 v_2 v_2 > 0   (3.57)

Note that the middle two terms can be combined and written as 2[(B_12 + B_21)/2] v_1 v_2. Similarly, we can write the first term as [(B_11 + B_11)/2] v_1 v_1. The remaining term can also be so written. Thus, the requirement for positive definiteness depends only on the symmetric part of the matrix [B]. The non-symmetric part has no influence on whether or not a matrix is positive definite. Consequently, we may replace Eq. (3.55) by the equivalent, but more carefully crafted, statement:

[B] is positive definite if and only if {v}^T [A] {v} > 0 for all {v} ≠ {0}, where [A] is the symmetric part of [B].

It can be shown that a matrix is positive definite if and only if the characteristic invariants of the symmetric part of the matrix are all positive.*

Fortunately, there is an even simpler test for positive definiteness: you only have to verify that the members of any one nested set of principal minors are all positive! This calculation is easier than finding the invariants themselves because it requires evaluation of only one principal minor determinant of each size (you don’t have to evaluate all of them).

* It is possible to construct a matrix that has all positive invariants, but whose symmetric part does not have all positive invariants.
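The nested-minor test is easy to implement. Below is a minimal sketch (not from the original text; is_positive_definite is a hypothetical helper) that symmetrizes the matrix and checks one nested sequence of principal minors, namely the leading ones.

import numpy as np

def is_positive_definite(B):
    """Check {v}^T [B] {v} > 0 for all nonzero {v} by testing that one
    nested set of principal minors of the symmetric part is all positive."""
    A = 0.5 * (B + B.T)    # symmetric part; the skew part drops out of v.B.v
    N = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, N + 1))

B = np.array([[2.0, -1.0, 0.0],
              [1.0,  2.0, 0.0],    # non-symmetric, but its symmetric part
              [0.0,  0.0, 1.0]])   # is diag(2, 2, 1), which is positive
print(is_positive_definite(B))     # True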
The cofactor-determinant connection

Let [A]^C denote the matrix of cofactors A^C_ij associated with a square matrix [A]_{N×N}. This matrix is also sometimes called the adjugate matrix (not to be confused with “adjoint”). Recall the definition of the cofactor given in Eq. (3.39):

A^C_ij = (−1)^{i+j} det[M_ij]_{(N−1)×(N−1)}   (3.58)

By virtue of Eq. (3.42), we note that the transpose of the cofactor matrix is identically equal to the cofactor matrix associated with [A]^T. In other words, the cofactor and transpose operations commute:

([A]^C)^T = ([A]^T)^C   (3.59)

As a shorthand, we generally eliminate the parentheses and simply write [A]^{CT} to mean the transpose of the cofactor (or, equivalently, the cofactor of the transpose). The generalization of Eq. (3.38) is

Σ_{k=1}^{N} A_ik A^C_jk = 0 if i ≠ j, or det[A] if i = j   (3.60)

Written more compactly,

Σ_{k=1}^{N} A_ik A^C_jk = det[A] δ_ij   (3.61)

Written in matrix form,

[A][A]^{CT} = (det[A])[I]   (3.62)

It turns out that the location of the transpose and cofactor operations is inconsequential; the result will be the same in all cases. Namely, the following results also hold true:

[A]^C [A]^T = [A]^T [A]^C = [A]^{CT} [A] = (det[A])[I]   (3.63)

Inverse

The inverse of a matrix [A] is the matrix denoted [A]^{−1} for which

[A][A]^{−1} = [A]^{−1}[A] = [I]   (3.64)

If the inverse exists, then it is unique. If the inverse does not exist, then the matrix [A] is said to be “non-invertible” or “singular.” A necessary and sufficient condition for the inverse to exist is that the determinant must be nonzero:

det[A] ≠ 0   (3.65)
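As a concrete check of Eqs. (3.58) through (3.63), the sketch below (not part of the original text; cofactor_matrix is a hypothetical helper) builds the cofactor matrix directly from signed minors and verifies the matrix identities.

import numpy as np

def cofactor_matrix(A):
    """A^C_ij = (-1)**(i+j) * det[M_ij], with M_ij the submatrix of A
    obtained by striking out row i and column j (Eq. 3.58)."""
    N = A.shape[0]
    C = np.empty_like(A)
    for i in range(N):
        for j in range(N):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
C = cofactor_matrix(A)
dI = np.linalg.det(A) * np.eye(3)

print(np.allclose(A @ C.T, dI))   # Eq. (3.62)
print(np.allclose(C @ A.T, dI))   # Eq. (3.63), first form
print(np.allclose(C.T @ A, dI))   # Eq. (3.63), third form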
Comparing Eqs. (3.63) and (3.64), we note that the inverse may be readily computed from the cofactor by

[A]^{−1} = [A]^{CT} / det[A]   (3.66)

While this definition does uniquely define the inverse, it must never be used as a definition of the cofactor matrix. The cofactor matrix is well defined and generally nonzero even if the matrix [A] is singular.

Eigenvalues and eigenvectors

A nonzero vector (array) {p} is called an eigenvector of a square matrix [A] if there exists a scalar λ, called the eigenvalue, such that [A]{p} = λ{p}. In order for this condition to have a non-trivial solution, the determinant of the matrix [A] − λ[I] must be zero. Setting this determinant to zero results in a polynomial equation, called the characteristic equation, for λ. If [A] is a 2×2 matrix, the equation will be quadratic. If [A] is a 3×3 matrix, the equation will be cubic, and so forth. We highly recommend that you do not construct the matrix [A] − λ[I] and then set its determinant equal to zero. While that would certainly work, it allows for too many opportunities to make an arithmetic error. Instead, the fastest way to generate the characteristic equation is to first find all of the characteristic invariants of [A]. These invariants are the coefficients, with alternating sign, in the characteristic equation, as follows:

For [A]_{2×2}, the characteristic equation is

λ² − I_1 λ + I_2 = 0,
where I_1 = A_11 + A_22 and I_2 = det[A_11 A_12; A_21 A_22].   (3.67)

For [A]_{3×3}, the characteristic equation is

λ³ − I_1 λ² + I_2 λ − I_3 = 0,
where I_1 = A_11 + A_22 + A_33,
I_2 = det[A_11 A_12; A_21 A_22] + det[A_11 A_13; A_31 A_33] + det[A_22 A_23; A_32 A_33], and
I_3 = det[A_11 A_12 A_13; A_21 A_22 A_23; A_31 A_32 A_33].   (3.68)

For [A]_{4×4}, the characteristic equation is λ⁴ − I_1 λ³ + I_2 λ² − I_3 λ + I_4 = 0. Higher-dimension matrices are similar.
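A quick numerical illustration of Eq. (3.68) (not from the original text; the invariants are computed directly, mirroring the earlier sketch): the roots of λ³ − I_1 λ² + I_2 λ − I_3 should reproduce the eigenvalues returned by the library solver.

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

I1 = np.trace(A)
I2 = sum(np.linalg.det(A[np.ix_(r, r)]) for r in [(0, 1), (0, 2), (1, 2)])
I3 = np.linalg.det(A)

# Characteristic equation (3.68): lambda^3 - I1 lambda^2 + I2 lambda - I3 = 0
roots = np.roots([1.0, -I1, I2, -I3])
print(np.sort(roots))                  # matches the eigenvalues below
print(np.sort(np.linalg.eigvals(A)))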
Because the characteristic equation is a polynomial equation, an N×N matrix will have up to N possible eigenvalues. For each solution λ_i, there exists at least one corresponding eigenvector {p}_i, which is determined by solving

[A]{p}_i = λ_i {p}_i   (no sum on i)   (3.69)

The solution for {p}_i will have an undetermined magnitude and, for symmetric matrices, it is conventional to set the magnitude to one. For non-symmetric matrices, however, the normalization convention is different, as explained below.

If an eigenvalue λ_i has algebraic multiplicity m (i.e., if the characteristic equation gives a root λ_i repeated m times), then there can be no more than a total of m independent eigenvectors associated with that eigenvalue; there might be fewer (though there is always at least one). If the matrix [A] is symmetric, then it is well known [22] that it is always possible to find m independent eigenvectors. The directions of the eigenvectors when the multiplicity is greater than one are arbitrary. However, the one thing that is unique is the span of these vectors (see page 5), and it is conventional to set the eigenvectors to any orthonormal set of vectors lying in the span. For non-symmetric matrices, it might happen that an eigenvalue of multiplicity m corresponds to a total of only μ < m linearly independent eigenvectors, where μ is called the geometric multiplicity. For example, the matrix

[5 3; 0 5]   (3.70)

has an eigenvalue λ = 5 with algebraic multiplicity of two. To find the associated eigenvector(s), we must solve

[5 3; 0 5] {p_1; p_2} = 5 {p_1; p_2}   (3.71)

Multiplying this out gives

5p_1 + 3p_2 = 5p_1   (3.72)

5p_2 = 5p_2   (3.73)

The second equation gives us no information, and the first equation gives the constraint that p_2 = 0. Therefore, we have only one eigenvector (geometric multiplicity equals one), given by {1, 0}, even though the eigenvalue had algebraic multiplicity of two. When the geometric multiplicity of an eigenvalue is less than the algebraic multiplicity, then there does still exist a subspace that is uniquely associated with the multiple eigenvalue. However, characterizing this subspace requires solving a “generalized eigenproblem” to construct additional vectors that will combine with the one or more ordinary eigenvectors to
form a set of vectors that span the space. The process for doing this is onerous, and we have not yet personally happened upon any engineering application for which finding these generalized eigenvectors provides any useful information, so we will not cover the details. Instructions for the process can be found in [23, 21, 24]. If the generalized eigenvectors are truly sought, then they can be found via the “JordanDecomposition” command in Mathematica [25] (see the discussion below for how to interpret the result).

Similarity transformations. Suppose that we have a set of eigenvalues {λ_1, λ_2, …, λ_N} for a matrix [A], possibly with some of these eigenvalues having algebraic multiplicities greater than one. Let [L] denote the matrix whose columns contain the corresponding eigenvectors (augmented, where necessary, to include generalized eigenvectors for the cases where the geometric multiplicity is less than the algebraic multiplicity; the ordinary eigenvectors corresponding to a given eigenvalue should always, by convention, be entered into columns of [L] before the generalized eigenvectors). Then it can be shown that the original matrix satisfies the similarity transformation

[A] = [L][Λ][L]^{−1}   (3.74)

If there are no generalized eigenvectors contained in the matrix [L], then the [Λ] matrix is diagonal, with the diagonal components being equal to the eigenvalues. In this case, the original matrix [A] is said to be “diagonalizable.” If, on the other hand, [L] contains any generalized eigenvectors, then [Λ] still contains the eigenvalues on the diagonal, but it additionally will contain a “1” in the (k−1, k) position corresponding to each generalized eigenvector in the kth column. In this form, the matrix [Λ] is said to be in Jordan canonical form. For example, the similarity transformation corresponding to Eq. (3.70) is

[5 3; 0 5] = [1 0; 0 1/3] [5 1; 0 5] [1 0; 0 1/3]^{−1}   (3.75)

This result can be obtained in Mathematica [25] via the command JordanDecomposition[{{5,3},{0,5}}]. From this result, we note that the presence of the “1” in the (1,2) position of the [Λ] matrix implies that the second column of [L] must contain a generalized eigenvector.

The matrix [L] will be orthogonal (i.e., [L]^{−1} = [L]^T) if and only if the original matrix [A] is symmetric. For symmetric matrices, there will never be any generalized eigenvectors (i.e., the algebraic and geometric eigenvalue multiplicities will always be equal), and the [Λ] matrix will therefore always be fully diagonal (no “1” on any off-diagonal).
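The text obtains Eq. (3.75) with Mathematica’s JordanDecomposition; an analogous check can be made with SymPy’s jordan_form (a swap of tools, not the author’s method). A floating-point eigensolver also exposes the defect: NumPy returns two nearly parallel eigenvector columns for the matrix of Eq. (3.70).

import numpy as np
import sympy as sp

# Jordan form of the defective matrix from Eq. (3.70): A = P J P^-1
A = sp.Matrix([[5, 3], [0, 5]])
P, J = A.jordan_form()
print(J)                                  # [[5, 1], [0, 5]]: note the "1"
print(sp.simplify(P * J * P.inv() - A))   # zero matrix, so A = P J P^-1

# Geometric multiplicity one shows up numerically as (nearly) parallel
# eigenvector columns:
w, V = np.linalg.eig(np.array([[5.0, 3.0], [0.0, 5.0]]))
print(w)   # [5. 5.]
print(V)   # both columns parallel to [1, 0] up to rounding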
Finding eigenvectors