\chapter{Exterior product \label{sec:Exterior-product}}
In this chapter I introduce one of the most useful constructions in
basic linear algebra --- the exterior product, denoted by $\mathbf{a}\wedge\mathbf{b}$,
where $\mathbf{a}$ and $\mathbf{b}$ are vectors from a space $V$.
The basic idea of the exterior product is that we would like to define
an \emph{antisymmetric} and bilinear product of vectors. In other
words, we would like to have the properties $\mathbf{a}\wedge\mathbf{b}=-\mathbf{b}\wedge\mathbf{a}$
and $\mathbf{a}\wedge(\mathbf{b}+\lambda\mathbf{c})=\mathbf{a}\wedge\mathbf{b}+\lambda\mathbf{a}\wedge\mathbf{c}$.
\section{Motivation\label{sub:Motivation-for-exterior}}
Here I discuss, at some length, the motivation for introducing the
exterior product. The motivation is geometrical and comes from considering
the properties of areas and volumes in the framework of elementary
Euclidean geometry. I will proceed with a formal definition of the
exterior product in Sec.~\ref{sub:Definition-of-the-exterior}. In
order to understand the definition explained there, it is not necessary
to use this geometric motivation because the definition will be purely
algebraic. Nevertheless, I feel that this motivation will be helpful
for some readers.
\subsection{Two-dimen\-sion\-al oriented area\label{sub:Two-dimensional-oriented}}
We work in a two-dimen\-sion\-al Euclidean space, such as that considered
in elementary geometry. We assume that the usual geometrical definition
of the area of a parallelogram is known.
Consider the area $Ar(\mathbf{a},\mathbf{b})$ of a parallelogram
spanned by vectors $\mathbf{a}$ and $\mathbf{b}$. It is known from
elementary geometry that $Ar(\mathbf{a},\mathbf{b})=\left|\mathbf{a}\right|\cdot\left|\mathbf{b}\right|\cdot\sin\alpha$
where $\alpha$ is the angle between the two vectors, which is always
between 0 and $\pi$ (we do not take into account the orientation
of this angle). Thus defined, the area $Ar$ is always non-negative.
Let us investigate $Ar(\mathbf{a},\mathbf{b})$ as a function of the
vectors $\mathbf{a}$ and $\mathbf{b}$. If we stretch the vector
$\mathbf{b}$, say, by factor 2, the area is also increased by factor
2. However, if we multiply $\mathbf{b}$ by the number $-2$, the
area will be multiplied by $2$ rather than by $-2$:\[
Ar(\mathbf{a},2\mathbf{b})=Ar(\mathbf{a},-2\mathbf{b})=2Ar(\mathbf{a},\mathbf{b}).\]
Similarly, for some vectors $\mathbf{a},\mathbf{b},\mathbf{c}$ such
as shown in Fig.~\ref{fig:The-area-of2}, we have $Ar(\mathbf{a},\mathbf{b}+\mathbf{c})=Ar(\mathbf{a},\mathbf{b})+Ar(\mathbf{a},\mathbf{c})$.
However, if we consider $\mathbf{b}=-\mathbf{c}$ then we obtain \begin{align*}
Ar(\mathbf{a},\mathbf{b}+\mathbf{c}) & =Ar(\mathbf{a},0)=0\\
& \neq Ar(\mathbf{a},\mathbf{b})+Ar(\mathbf{a},-\mathbf{b})=2Ar(\mathbf{a},\mathbf{b}).\end{align*}
Hence, the area $Ar(\mathbf{a},\mathbf{b})$ is, strictly speaking,
\emph{not} a linear function of the vectors $\mathbf{a}$ and $\mathbf{b}$:
\begin{align*}
Ar(\lambda\mathbf{a},\mathbf{b}) & =\left|\lambda\right|Ar(\mathbf{a},\mathbf{b})\neq\lambda\, Ar(\mathbf{a},\mathbf{b}),\\
Ar(\mathbf{a},\mathbf{b}+\mathbf{c}) & \neq Ar(\mathbf{a},\mathbf{b})+Ar(\mathbf{a},\mathbf{c}).\end{align*}
Nevertheless, as we have seen, the properties of linearity hold in
\emph{some} cases. If we look closely at those cases, we find that
linearity holds precisely when we do not change the orientation of
the vectors. It would be more convenient if the linearity properties
held in all cases.
The trick is to replace the area function $Ar$ with the \textbf{oriented
area}\index{oriented area} function $A(\mathbf{a},\mathbf{b})$.
Namely, we define the function $A(\mathbf{a},\mathbf{b})$ by \[
A(\mathbf{a},\mathbf{b})=\pm\left|\mathbf{a}\right|\cdot\left|\mathbf{b}\right|\cdot\sin\alpha,\]
where the sign is chosen positive when the angle $\alpha$ is measured
from the vector $\mathbf{a}$ to the vector $\mathbf{b}$ in the counterclockwise
direction, and negative otherwise.
\paragraph{Statement:}
The oriented area $A(\mathbf{a},\mathbf{b})$ of a parallelogram spanned
by the vectors $\mathbf{a}$ and $\mathbf{b}$ in the two-dimen\-sion\-al
Euclidean space is an antisymmetric and bilinear function of the vectors
$\mathbf{a}$ and $\mathbf{b}$:\begin{align*}
A(\mathbf{a},\mathbf{b}) & =-A(\mathbf{b},\mathbf{a}),\\
A(\lambda\mathbf{a},\mathbf{b}) & =\lambda\, A(\mathbf{a},\mathbf{b}),\\
A(\mathbf{a},\mathbf{b}+\mathbf{c}) & =A(\mathbf{a},\mathbf{b})+A(\mathbf{a},\mathbf{c}).\qquad\text{(the sum law)}\end{align*}
%
\begin{figure}
\begin{centering}
\psfrag{0}{0}\psfrag{A}{$A$} \psfrag{B}{$B$} \psfrag{D}{$D$} \psfrag{C}{$C$} \psfrag{E}{$E$} \psfrag{v1}{$\mathbf{b}$} \psfrag{v2}{$\mathbf{a}$} \psfrag{v1lambda}{$\mathbf{b}+\alpha\mathbf{a}$}\includegraphics[width=3in]{./figs/v1v2-vol.eps}
\par\end{centering}
\caption{The area of the parallelogram $0ACB$ spanned by $\mathbf{a}$ and
$\mathbf{b}$ is equal to the area of the parallelogram $0ADE$ spanned
by $\mathbf{a}$ and $\mathbf{b}+\alpha\mathbf{a}$ due to the equality
of areas $ACD$ and $0BE$.\label{fig:The-area-of1}}
\end{figure}
\subparagraph{Proof:}
The first property is a straightforward consequence of the sign rule
in the definition of $A$.
Proving the second property requires considering the cases $\lambda>0$
and $\lambda<0$ separately. If $\lambda>0$ then the orientation
of the pair $\left(\mathbf{a},\mathbf{b}\right)$ remains the same
and then it is clear that the property holds: When we rescale $\mathbf{a}$
by $\lambda$, the parallelogram is stretched and its area increases
by factor $\lambda$. If $\lambda<0$ then the orientation of the
parallelogram is reversed and the oriented area changes sign.
To prove the sum law, we consider two cases: either $\mathbf{c}$
is parallel to $\mathbf{a}$ or it is not. If $\mathbf{c}$ is parallel
to $\mathbf{a}$, say $\mathbf{c}=\alpha\mathbf{a}$, we use Fig.~\ref{fig:The-area-of1}
to show that $A(\mathbf{a},\mathbf{b}+\alpha\mathbf{a})=A(\mathbf{a},\mathbf{b})$,
which yields the desired statement since $A(\mathbf{a},\alpha\mathbf{a})=0$.
If $\mathbf{c}$ is not parallel to $\mathbf{a}$, we use Fig.~\ref{fig:The-area-of2}
to show that $A(\mathbf{a},\mathbf{b}+\mathbf{c})=A(\mathbf{a},\mathbf{b})+A(\mathbf{a},\mathbf{c})$.
Analogous geometric constructions can be made for different possible
orientations of the vectors $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$.\hfill{}$\blacksquare$
%
\begin{figure}
\begin{centering}
\psfrag{A}{$A$} \psfrag{B}{$B$} \psfrag{D}{$D$} \psfrag{C}{$C$} \psfrag{F}{$F$} \psfrag{E}{$E$} \psfrag{a}{$\mathbf{a}$} \psfrag{b}{$\mathbf{b}$} \psfrag{c}{$\mathbf{c}$}\psfrag{b+c}{$\mathbf{b}+\mathbf{c}$}\includegraphics[width=3in]{./figs/2darea}
\par\end{centering}
\caption{The area of the parallelogram spanned by $\mathbf{a}$ and $\mathbf{b}$
(equal to the area of $CEFD$) plus the area of the parallelogram
spanned by $\mathbf{a}$ and $\mathbf{c}$ (the area of $ACDB$) equals
the area of the parallelogram spanned by $\mathbf{a}$ and $\mathbf{b}+\mathbf{c}$
(the area of $AEFB$) because of the equality of the areas of $ACE$
and $BDF$.\label{fig:The-area-of2}}
\end{figure}
It is relatively easy to compute the oriented area because of its
algebraic properties. Suppose the vectors $\mathbf{a}$ and $\mathbf{b}$
are given through their components in a standard basis $\left\{ \mathbf{e}_{1},\mathbf{e}_{2}\right\} $,
for instance \[
\mathbf{a}=\alpha_{1}\mathbf{e}_{1}+\alpha_{2}\mathbf{e}_{2},\quad\mathbf{b}=\beta_{1}\mathbf{e}_{1}+\beta_{2}\mathbf{e}_{2}.\]
We assume, of course, that the vectors $\mathbf{e}_{1}$ and $\mathbf{e}_{2}$
are orthogonal to each other and have unit length, as is appropriate
in a Euclidean space. We also assume that the right angle is measured
from $\mathbf{e}_{1}$ to $\mathbf{e}_{2}$ in the counter-clockwise
direction, so that $A(\mathbf{e}_{1},\mathbf{e}_{2})=+1$. Then we
use the Statement and the properties $A(\mathbf{e}_{1},\mathbf{e}_{1})=0$,
$A(\mathbf{e}_{1},\mathbf{e}_{2})=1$, $A(\mathbf{e}_{2},\mathbf{e}_{2})=0$
to compute\begin{align*}
A(\mathbf{a},\mathbf{b}) & =A(\alpha_{1}\mathbf{e}_{1}+\alpha_{2}\mathbf{e}_{2},\beta_{1}\mathbf{e}_{1}+\beta_{2}\mathbf{e}_{2})\\
& =\alpha_{1}\beta_{2}A(\mathbf{e}_{1},\mathbf{e}_{2})+\alpha_{2}\beta_{1}A(\mathbf{e}_{2},\mathbf{e}_{1})\\
& =\alpha_{1}\beta_{2}-\alpha_{2}\beta_{1}.\end{align*}
The ordinary (unoriented) area is then obtained as the absolute value
of the oriented area, $Ar(\mathbf{a},\mathbf{b})=\left|A(\mathbf{a},\mathbf{b})\right|$.
It turns out that the oriented area, due to its strict linearity properties,
is a much more convenient and powerful construction than the unoriented
area.
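As a quick numerical illustration (a sketch, not part of the formal development; the function name \texttt{oriented\_area} is our own choice), the component formula for the oriented area can be evaluated in a few lines of Python:

```python
# Oriented area A(a, b) = a1*b2 - a2*b1 of the parallelogram spanned by
# two vectors in R^2, given by their components in the standard basis.
# A minimal sketch; the name `oriented_area` is our own.

def oriented_area(a, b):
    return a[0] * b[1] - a[1] * b[0]

a, b = (1.0, 0.0), (1.0, 1.0)
print(oriented_area(a, b))       # counterclockwise pair: positive
print(oriented_area(b, a))       # antisymmetry: the sign flips
print(abs(oriented_area(a, b)))  # unoriented area Ar = |A|
```

One can check bilinearity numerically as well: rescaling either argument rescales the result by the same (signed) factor.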
\subsection{Parallelograms in $\mathbb{R}^{3}$ and in $\mathbb{R}^{n}$ \label{sub:Area-of-two-dimensional-parallelograms}}
Let us now work in the Euclidean space $\mathbb{R}^{3}$ with a standard
basis $\left\{ \mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\right\} $.
We can similarly try to characterize the area of a parallelogram spanned
by two vectors $\mathbf{a}$, $\mathbf{b}$. It is, however, not possible
to characterize the orientation of the area simply by a sign. We also
cannot use a geometric construction such as that in Fig.~\ref{fig:The-area-of2};
in fact it is \emph{not true} in three dimensions that the area spanned
by $\mathbf{a}$ and $\mathbf{b}+\mathbf{c}$ is equal to the sum
of $Ar(\mathbf{a},\mathbf{b})$ and $Ar(\mathbf{a},\mathbf{c})$.
Can we still define some kind of {}``oriented area'' that obeys
the sum law?
Let us consider Fig.~\ref{fig:The-area-of2} as a figure showing
the \emph{projection} of the areas of the three parallelograms onto
some coordinate plane, say, the plane of the basis vectors $\left\{ \mathbf{e}_{1},\mathbf{e}_{2}\right\} $.
It is straightforward to see that the projections of the areas obey
the sum law as oriented areas.
\paragraph{Statement:}
Let $\mathbf{a},\mathbf{b}$ be two vectors in $\mathbb{R}^{3}$,
and let $P(\mathbf{a},\mathbf{b})$ be the parallelogram spanned by
these vectors. Denote by $P(\mathbf{a},\mathbf{b})_{\mathbf{e}_{1},\mathbf{e}_{2}}$
the parallelogram within the coordinate plane $\text{Span}\left\{ \mathbf{e}_{1},\mathbf{e}_{2}\right\} $
obtained by projecting $P(\mathbf{a},\mathbf{b})$ onto that coordinate
plane, and similarly for the other two coordinate planes. Denote by
$A(\mathbf{a},\mathbf{b})_{\mathbf{e}_{1},\mathbf{e}_{2}}$ the oriented
area of $P(\mathbf{a},\mathbf{b})_{\mathbf{e}_{1},\mathbf{e}_{2}}$.
Then $A(\mathbf{a},\mathbf{b})_{\mathbf{e}_{1},\mathbf{e}_{2}}$ is
a bilinear, antisymmetric function of $\mathbf{a}$ and $\mathbf{b}$.
\subparagraph{Proof:}
The projection onto the coordinate plane of $\mathbf{e}_{1},\mathbf{e}_{2}$
is a linear transformation. Hence, the vector $\mathbf{a}+\lambda\mathbf{b}$
is projected onto the sum of the projections of $\mathbf{a}$ and
$\lambda\mathbf{b}$. Then we apply the arguments in the proof of
Statement~\ref{sub:Two-dimensional-oriented} to the \emph{projections}
of the vectors; in particular, Figs.~\ref{fig:The-area-of1} and~\ref{fig:The-area-of2}
are interpreted as showing the projections of all vectors onto the
coordinate plane $\mathbf{e}_{1},\mathbf{e}_{2}$. It is then straightforward
to see that all the properties of the oriented area hold for the projected
oriented areas. The details are left as an exercise.\hfill{}$\blacksquare$
It is therefore convenient to consider the oriented areas of the three
projections --- $A(\mathbf{a},\mathbf{b})_{\mathbf{e}_{1},\mathbf{e}_{2}}$,
$A(\mathbf{a},\mathbf{b})_{\mathbf{e}_{2},\mathbf{e}_{3}}$, $A(\mathbf{a},\mathbf{b})_{\mathbf{e}_{3},\mathbf{e}_{1}}$
--- as three components of a \emph{vector-valued} area $A(\mathbf{a},\mathbf{b})$
of the parallelogram spanned by $\mathbf{a},\mathbf{b}$. Indeed,
it can be shown that these three projected areas coincide with the
three Euclidean components of the vector product $\mathbf{a}\times\mathbf{b}$.
The vector product is the traditional way such areas are represented
in geometry: the vector $\mathbf{a}\times\mathbf{b}$ represents at
once the magnitude of the area and the orientation of the parallelogram.
One computes the unoriented area of a parallelogram as the length
of the vector $\mathbf{a}\times\mathbf{b}$ representing the oriented
area,\[
Ar(\mathbf{a},\mathbf{b})=\left[A(\mathbf{a},\mathbf{b})_{\mathbf{e}_{1},\mathbf{e}_{2}}^{2}+A(\mathbf{a},\mathbf{b})_{\mathbf{e}_{2},\mathbf{e}_{3}}^{2}+A(\mathbf{a},\mathbf{b})_{\mathbf{e}_{3},\mathbf{e}_{1}}^{2}\right]^{\frac{1}{2}}.\]
However, the vector product cannot be generalized to all higher-dimen\-sion\-al
spaces. Luckily, the vector product does not play an essential role
in the construction of the oriented area.
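To illustrate this in code (a sketch; all function names are our own), the three projected oriented areas in $\mathbb{R}^{3}$ are computed by the two-dimensional formula of the previous subsection, and the unoriented area is the square root of the sum of their squares. The reader may verify that the three numbers coincide, up to ordering, with the components of $\mathbf{a}\times\mathbf{b}$:

```python
import math

# Projected oriented areas of the parallelogram spanned by a, b in R^3,
# one per coordinate plane, each given by the 2D formula a_i*b_j - a_j*b_i.
# A sketch; the function names are our own.

def projected_areas(a, b):
    """Oriented areas of projections onto Span{e1,e2}, Span{e2,e3}, Span{e3,e1}."""
    A12 = a[0] * b[1] - a[1] * b[0]
    A23 = a[1] * b[2] - a[2] * b[1]
    A31 = a[2] * b[0] - a[0] * b[2]
    return A12, A23, A31

def unoriented_area(a, b):
    # The "Pythagoras theorem for areas": square root of the sum of squares.
    return math.sqrt(sum(x * x for x in projected_areas(a, b)))

print(projected_areas((1.0, 0.0, 0.0), (0.0, 2.0, 0.0)))
print(unoriented_area((1.0, 0.0, 0.0), (0.0, 2.0, 0.0)))
```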
Instead of working with the vector product, we will generalize the
idea of projecting the parallelogram onto coordinate planes. Consider
a parallelogram spanned by vectors $\mathbf{a},\mathbf{b}$ in an
$n$-dimen\-sion\-al Euclidean space $V$ with the standard basis
$\left\{ \mathbf{e}_{1},...,\mathbf{e}_{n}\right\} $. While in three-dimen\-sion\-al
space we had just three projections (onto the coordinate planes $xy$,
$xz$, $yz$), in an $n$-dimen\-sion\-al space we have $\frac{1}{2}n(n-1)$
coordinate planes, which can be denoted by $\text{Span}\left\{ \mathbf{e}_{i},\mathbf{e}_{j}\right\} $
(with $1\leq i<j\leq n$). We may construct the $\frac{1}{2}n(n-1)$
projections of the parallelogram onto these coordinate planes. Each
of these projections has an oriented area; that area is a bilinear,
antisymmetric number-valued function of the vectors $\mathbf{a},\mathbf{b}$.
(The proof of the Statement above does not use the fact that the space
is \emph{three}-dimen\-sion\-al!) We may then regard these $\frac{1}{2}n(n-1)$
numbers as the components of a vector representing the oriented area
of the parallelogram. It is clear that all these components are needed
in order to describe the actual geometric \emph{orientation} of the
parallelogram in the $n$-dimen\-sion\-al space.
We arrived at the idea that the oriented area of the parallelogram
spanned by $\mathbf{a},\mathbf{b}$ is an antisymmetric, bilinear
function $A(\mathbf{a},\mathbf{b})$ whose value is a vector with
$\frac{1}{2}n(n-1)$ components, i.e.~a vector \emph{in a new space}
--- the {}``space of oriented areas,'' as it were. This space is
$\frac{1}{2}n(n-1)$-dimen\-sion\-al. We will construct this space
explicitly below; it is the space of bivectors, to be denoted by $\wedge^{2}V$.
We will see that the unoriented area of the parallelogram is computed
as the \emph{length} of the vector $A(\mathbf{a},\mathbf{b})$, i.e.~as
the square root of the sum of squares of the areas of the projections
of the parallelogram onto the coordinate planes. This is a generalization
of the Pythagoras theorem to areas in higher-dimen\-sion\-al spaces.
The analogy between ordinary vectors and vector-val\-ued areas can
be understood visually as follows. A straight line segment in an $n$-dimen\-sion\-al
space is represented by a vector whose $n$ components (in an orthonormal
basis) are the signed lengths of the $n$ projections of the line
segment onto the coordinate axes. (The components are \emph{signed},
or \emph{oriented}, i.e.~taken with a negative sign if the orientation
of the vector is opposite to the orientation of the axis.) The length
of a straight line segment, i.e.~the length of the vector $\mathbf{v}$,
is then computed as $\sqrt{\left\langle \mathbf{v},\mathbf{v}\right\rangle }$.
The scalar product $\left\langle \mathbf{v},\mathbf{v}\right\rangle $
is equal to the sum of squared lengths of the projections because
we are using an orthonormal basis. A parallelogram in space is represented
by a vector $\psi$ whose ${n \choose 2}$ components are the \emph{oriented}
areas of the ${n \choose 2}$ projections of the parallelogram onto
the coordinate planes. (The vector $\psi$ belongs to the space of
oriented areas, not to the original $n$-dimen\-sion\-al space.)
The numerical value of the area of the parallelogram is then computed
as $\sqrt{\left\langle \psi,\psi\right\rangle }$. The scalar product
$\left\langle \psi,\psi\right\rangle $ in the space of oriented areas
is equal to the sum of squared areas of the projections because the
${n \choose 2}$ unit areas in the coordinate planes are an orthonormal
basis (according to the definition of the scalar product in the space
of oriented areas).
The generalization of the Pythagoras theorem holds not only for areas
but also for higher-dimen\-sion\-al volumes. A general proof of
this theorem will be given in Sec.~\ref{proof-of-pythagoras}, using
the exterior product and several other constructions to be developed
below.
\section{Exterior product\label{sub:Definition-of-the-exterior}}
In the previous section I motivated the introduction of the antisymmetric
product by showing its connection to areas and volumes. In this section
I will give the definition and work out the properties of the exterior
product in a purely algebraic manner, without using any geometric
intuition. This will enable us to work with vectors in arbitrary dimensions,
to obtain many useful results, and eventually also to appreciate more
fully the geometric significance of the exterior product.
As explained in Sec.~\ref{sub:Area-of-two-dimensional-parallelograms},
it is possible to represent the oriented area of a parallelogram by
a vector in some auxiliary space. The oriented area is much more convenient
to work with because it is a \emph{bilinear} function of the vectors
$\mathbf{a}$ and $\mathbf{b}$ (this is explained in detail in Sec.~\ref{sub:Motivation-for-exterior}).
{}``Product'' is another word for {}``bilinear function.'' We
have also seen that the oriented area is an \emph{antisymmetric} function
of the vectors $\mathbf{a}$ and $\mathbf{b}$.
In three dimensions, an oriented area is represented by the cross
product $\mathbf{a}\times\mathbf{b}$, which is indeed an antisymmetric
and bilinear product. So we expect that the oriented area in higher
dimensions can be represented by some kind of new antisymmetric product
of $\mathbf{a}$ and $\mathbf{b}$; let us denote this product (to
be defined below) by $\mathbf{a}\wedge\mathbf{b}$, pronounced {}``a
wedge b.'' The value of $\mathbf{a}\wedge\mathbf{b}$ will be a vector
in a \emph{new} vector space. We will also construct this new space
explicitly.
\subsection{Definition of exterior product}
Like the tensor product space, the space of exterior products can
be defined solely by its algebraic properties. We can consider the
space of \emph{formal} \emph{expressions} like $\mathbf{a}\wedge\mathbf{b}$,
$3\mathbf{a}\wedge\mathbf{b}+2\mathbf{c}\wedge\mathbf{d}$, etc.,
and \emph{require} the properties of an antisymmetric, bilinear product
to hold.
Here is a more formal definition of the exterior product space: We
will construct an antisymmetric product {}``by hand,'' using the
tensor product space.
\paragraph{Definition 1:}
Given a vector space $V$, we define a new vector space $V\wedge V$
called the \textbf{exterior product}\index{exterior product} (or
antisymmetric tensor product, or alternating product, or \textbf{wedge
product}\index{wedge product}) of two copies of $V$. The space $V\wedge V$
is the subspace in $V\otimes V$ consisting of all \textbf{antisymmetric}
tensors, i.e.~tensors of the form\[
\mathbf{v}_{1}\otimes\mathbf{v}_{2}-\mathbf{v}_{2}\otimes\mathbf{v}_{1},\quad\mathbf{v}_{1,2}\in V,\]
and all linear combinations of such tensors. The exterior product
of two vectors $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$ is the expression
shown above; it is obviously an antisymmetric and bilinear function
of $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$.
For example, here is one particular element from $V\wedge V$, which
we write in two different ways using the axioms of the tensor product:\begin{align}
\left(\mathbf{u}+\mathbf{v}\right)\otimes\left(\mathbf{v}+\mathbf{w}\right)-\left(\mathbf{v}+\mathbf{w}\right)\otimes\left(\mathbf{u}+\mathbf{v}\right)=\mathbf{u}\otimes\mathbf{v}-\mathbf{v}\otimes\mathbf{u}\nonumber \\
+\mathbf{u}\otimes\mathbf{w}-\mathbf{w}\otimes\mathbf{u}+\mathbf{v}\otimes\mathbf{w}-\mathbf{w}\otimes\mathbf{v}\in V\wedge V.\label{eq:uvw calc 1}\end{align}
\subparagraph{Remark:}
A tensor $\mathbf{v}_{1}\otimes\mathbf{v}_{2}\in V\otimes V$ is not
equal to the tensor $\mathbf{v}_{2}\otimes\mathbf{v}_{1}$ if $\mathbf{v}_{1}\neq\mathbf{v}_{2}$.
This is so because there is no identity among the axioms of the tensor
product that would allow us to exchange the factors $\mathbf{v}_{1}$
and $\mathbf{v}_{2}$ in the expression $\mathbf{v}_{1}\otimes\mathbf{v}_{2}$.
\paragraph{Exercise 1:}
Prove that the {}``exchange map'' $\hat{T}\left(\mathbf{v}_{1}\otimes\mathbf{v}_{2}\right)\equiv\mathbf{v}_{2}\otimes\mathbf{v}_{1}$
is a canonically defined, linear map of $V\otimes V$ into itself.
Show that $\hat{T}$ has only two eigenvalues which are $\pm1$. Give
examples of eigenvectors with eigenvalues $+1$ and $-1$. Show that
the subspace $V\wedge V\subset V\otimes V$ is the eigenspace of the
exchange operator $\hat{T}$ with eigenvalue $-1$.
\emph{Hint:} $\hat{T}\hat{T}=\hat{1}_{V\otimes V}$. Consider tensors
of the form $\mathbf{u}\otimes\mathbf{v}\pm\mathbf{v}\otimes\mathbf{u}$
as candidate eigenvectors of $\hat{T}$.\hfill{}$\blacksquare$
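A concrete way to see this (a numerical sketch with our own helper names): in a fixed basis, a tensor from $V\otimes V$ is an $n\times n$ array of components, and the exchange map acts as the transpose of that array. Antisymmetric arrays are then eigenvectors with eigenvalue $-1$, and symmetric arrays are eigenvectors with eigenvalue $+1$:

```python
# In a basis {e_i}, a tensor t in V⊗V has components t[i][j], and the
# exchange map sends t[i][j] to t[j][i], i.e. it transposes the array.
# A sketch; the helper names are our own.

def exchange(t):
    n = len(t)
    return [[t[j][i] for j in range(n)] for i in range(n)]

def neg(t):
    return [[-x for x in row] for row in t]

# u⊗v - v⊗u with u = e1, v = e2 in a 2-dimensional space:
antisym = [[0, 1], [-1, 0]]
print(exchange(antisym) == neg(antisym))  # eigenvector with eigenvalue -1

# u⊗v + v⊗u is an eigenvector with eigenvalue +1:
sym = [[0, 1], [1, 0]]
print(exchange(sym) == sym)
```

Applying \texttt{exchange} twice returns the original array, which is the numerical counterpart of the hint $\hat{T}\hat{T}=\hat{1}_{V\otimes V}$.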
It is quite cumbersome to perform calculations in the tensor product
notation as we did in Eq.~(\ref{eq:uvw calc 1}). So let us write
the exterior product as $\mathbf{u}\wedge\mathbf{v}$ instead of $\mathbf{u}\otimes\mathbf{v}-\mathbf{v}\otimes\mathbf{u}$.
It is then straightforward to see that the {}``wedge'' symbol $\wedge$
indeed works like an anti-commutative multiplication, as we intended.
The rules of computation are summarized in the following statement.
\paragraph{Statement 1:}
One may save time and write $\mathbf{u}\otimes\mathbf{v}-\mathbf{v}\otimes\mathbf{u}\equiv\mathbf{u}\wedge\mathbf{v}\in V\wedge V$,
and the result of any calculation will be correct, as long as one
follows the rules:\begin{align}
\mathbf{u}\wedge\mathbf{v} & =-\mathbf{v}\wedge\mathbf{u},\label{eq:uv antisymm}\\
\left(\lambda\mathbf{u}\right)\wedge\mathbf{v} & =\lambda\left(\mathbf{u}\wedge\mathbf{v}\right),\\
\left(\mathbf{u}+\mathbf{v}\right)\wedge\mathbf{x} & =\mathbf{u}\wedge\mathbf{x}+\mathbf{v}\wedge\mathbf{x}.\label{eq:uv distrib}\end{align}
It follows also that $\mathbf{u}\wedge\left(\lambda\mathbf{v}\right)=\lambda\left(\mathbf{u}\wedge\mathbf{v}\right)$
and that $\mathbf{v}\wedge\mathbf{v}=0$. (These identities hold for
any vectors $\mathbf{u},\mathbf{v}\in V$ and any scalars $\lambda\in\mathbb{K}$.)
\subparagraph{Proof:}
These properties are direct consequences of the axioms of the tensor
product when applied to antisymmetric tensors. For example, the calculation~(\ref{eq:uvw calc 1})
now requires a simple expansion of brackets,\[
\left(\mathbf{u}+\mathbf{v}\right)\wedge\left(\mathbf{v}+\mathbf{w}\right)=\mathbf{u}\wedge\mathbf{v}+\mathbf{u}\wedge\mathbf{w}+\mathbf{v}\wedge\mathbf{w}.\]
Here we removed the term $\mathbf{v}\wedge\mathbf{v}$ which vanishes
due to the antisymmetry of $\wedge$. The details are left as an exercise.\hfill{}$\blacksquare$
Elements of the space $V\wedge V$, such as $\mathbf{a}\wedge\mathbf{b}+\mathbf{c}\wedge\mathbf{d}$,
are sometimes called \textbf{bivectors}\index{bivector}.%
\footnote{It is important to note that a bivector is not necessarily expressible
as a single-term product of two vectors; see the Exercise at the end
of Sec.~\ref{sub:Properties-of-the-ext-powers}.\index{single-term exterior products}%
} We will also want to define the exterior product of more than two
vectors. To define the exterior product of \emph{three} vectors, we
consider the subspace of $V\otimes V\otimes V$ that consists of antisymmetric
tensors of the form\begin{align}
\mathbf{a}\otimes\mathbf{b}\otimes\mathbf{c}-\mathbf{b}\otimes\mathbf{a}\otimes\mathbf{c}+\mathbf{c}\otimes\mathbf{a}\otimes\mathbf{b}-\mathbf{c}\otimes\mathbf{b}\otimes\mathbf{a}\nonumber \\
+\mathbf{b}\otimes\mathbf{c}\otimes\mathbf{a}-\mathbf{a}\otimes\mathbf{c}\otimes\mathbf{b}\label{eq:antisym 3}\end{align}
and linear combinations of such tensors. These tensors are called
\textbf{totally antisymmetric\index{totally antisymmetric}} because
they can be viewed as (tensor-valued) functions of the vectors $\mathbf{a},\mathbf{b},\mathbf{c}$
that change sign under exchange of any two vectors. The expression
in Eq.~(\ref{eq:antisym 3}) will be denoted for brevity by $\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}$,
similarly to the exterior product of two vectors, $\mathbf{a}\otimes\mathbf{b}-\mathbf{b}\otimes\mathbf{a}$,
which is denoted for brevity by $\mathbf{a}\wedge\mathbf{b}$. Here
is a general definition.
\paragraph{Definition 2:}
The \textbf{exterior product\index{exterior product} of $k$ copies}
of $V$ (also called the \textbf{$k$-th exterior power} of $V$)
is denoted by $\wedge^{k}V$ and is defined as the subspace of totally
antisymmetric tensors within $V\otimes...\otimes V$. In the concise
notation, this is the space spanned by expressions of the form\[
\mathbf{v}_{1}\wedge\mathbf{v}_{2}\wedge...\wedge\mathbf{v}_{k},\quad\mathbf{v}_{j}\in V,\]
assuming that the properties of the wedge product (linearity and antisymmetry)
hold as given by Statement~1. For instance, \begin{equation}
\mathbf{u}\wedge\mathbf{v}_{1}\wedge...\wedge\mathbf{v}_{k}=\left(-1\right)^{k}\mathbf{v}_{1}\wedge...\wedge\mathbf{v}_{k}\wedge\mathbf{u}\label{eq:uv pull}\end{equation}
({}``pulling a vector through $k$ other vectors changes sign $k$
times'').\hfill{}$\blacksquare$
The previously defined space of bivectors is in this notation $V\wedge V\equiv\wedge^{2}V$.
A natural extension of this notation is $\wedge^{0}V=\mathbb{K}$
and $\wedge^{1}V=V$. I will also use the following {}``wedge product''
notation,\[
\bigwedge_{k=1}^{n}\mathbf{v}_{k}\equiv\mathbf{v}_{1}\wedge\mathbf{v}_{2}\wedge...\wedge\mathbf{v}_{n}.\]
Tensors from the space $\wedge^{n}V$ are also called $n$-\textbf{vectors}\index{$n$-vectors}
or \textbf{antisymmetric tensors}\index{antisymmetric tensor} of
rank $n$.
\paragraph{Question:}
How to compute expressions containing multiple products such as $\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}$?
\subparagraph{Answer:}
Apply the rules shown in Statement~1. For example, one can permute
adjacent vectors and change sign,\[
\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}=-\mathbf{b}\wedge\mathbf{a}\wedge\mathbf{c}=\mathbf{b}\wedge\mathbf{c}\wedge\mathbf{a},\]
one can expand brackets,\[
\mathbf{a}\wedge(\mathbf{x}+4\mathbf{y})\wedge\mathbf{b}=\mathbf{a}\wedge\mathbf{x}\wedge\mathbf{b}+4\mathbf{a}\wedge\mathbf{y}\wedge\mathbf{b},\]
and so on. If the vectors $\mathbf{a},\mathbf{b},\mathbf{c}$ are
given as linear combinations of some basis vectors $\left\{ \mathbf{e}_{j}\right\} $,
we can thus reduce $\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}$ to
a linear combination of exterior products of basis vectors, such as
$\mathbf{e}_{1}\wedge\mathbf{e}_{2}\wedge\mathbf{e}_{3}$, $\mathbf{e}_{1}\wedge\mathbf{e}_{2}\wedge\mathbf{e}_{4}$,
etc.
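This reduction can be automated. Expanding the brackets as above, one finds that the coefficient of a basis wedge $\mathbf{e}_{i_{1}}\wedge...\wedge\mathbf{e}_{i_{k}}$ (with $i_{1}<...<i_{k}$) in $\mathbf{v}_{1}\wedge...\wedge\mathbf{v}_{k}$ is a $k\times k$ determinant built from the components of the vectors. Here is a Python sketch of that computation (all helper names are our own):

```python
from itertools import combinations

# Reduce v1∧...∧vk to a combination of basis wedges e_{i1}∧...∧e_{ik}
# (i1 < ... < ik); expanding the brackets shows that the coefficient of
# each basis wedge is a k×k determinant. A sketch; all names are our own.

def det(m):
    """Determinant via Laplace expansion along the first row (fine for small k)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def wedge_components(vectors, n):
    """Coefficients of v1∧...∧vk on the basis wedges of an n-dimensional space."""
    k = len(vectors)
    return {idx: det([[v[i] for v in vectors] for i in idx])
            for idx in combinations(range(n), k)}

# e1∧e2 in a 3-dimensional space has a single nonzero component:
print(wedge_components([(1, 0, 0), (0, 1, 0)], 3))
```

For $k=2$ and $n=2$ this reproduces the oriented-area formula $\alpha_{1}\beta_{2}-\alpha_{2}\beta_{1}$, and swapping any two input vectors flips the sign of every coefficient, as antisymmetry demands.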
\paragraph{Question:}
The notation $\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}$ suggests
that the exterior product is associative,\[
\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}=\left(\mathbf{a}\wedge\mathbf{b}\right)\wedge\mathbf{c}=\mathbf{a}\wedge(\mathbf{b}\wedge\mathbf{c}).\]
How can we make sense of this?
\subparagraph{Answer:}
If we want to be pedantic, we need to define the exterior product
operation $\wedge$ between a single-term bivector $\mathbf{a}\wedge\mathbf{b}$
and a vector $\mathbf{c}$, such that the result is \emph{by} \emph{definition}
the 3-vector $\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}$. We then
define the same operation on linear combinations of single-term bivectors,
\[
\left(\mathbf{a}\wedge\mathbf{b}+\mathbf{x}\wedge\mathbf{y}\right)\wedge\mathbf{c}\equiv\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}+\mathbf{x}\wedge\mathbf{y}\wedge\mathbf{c}.\]
Thus we have defined the exterior product between $\wedge^{2}V$ and
$V$, the result being a 3-vector from $\wedge^{3}V$. We then need
to verify that the results do not depend on the choice of the vectors
such as $\mathbf{a},\mathbf{b},\mathbf{x},\mathbf{y}$ in the representation
of a bivector: A different representation can be achieved only by
using the properties of the exterior product (i.e.~the axioms of
the tensor product), e.g.~we may replace $\mathbf{a}\wedge\mathbf{b}$
by $-\mathbf{b}\wedge\left(\mathbf{a}+\lambda\mathbf{b}\right)$.
It is easy to verify that any such replacements will not modify the
resulting 3-vector, e.g. \[
\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}=-\mathbf{b}\wedge\left(\mathbf{a}+\lambda\mathbf{b}\right)\wedge\mathbf{c},\]
again due to the properties of the exterior product. This consideration
shows that calculations with exterior products are consistent with
our algebraic intuition. We may indeed compute $\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}$
as $\left(\mathbf{a}\wedge\mathbf{b}\right)\wedge\mathbf{c}$ or as
$\mathbf{a}\wedge\left(\mathbf{b}\wedge\mathbf{c}\right)$.
\paragraph{Example~1:}
Suppose we work in $\mathbb{R}^{3}$ and have vectors $\mathbf{a}=\left(0,\frac{1}{2},-\frac{1}{2}\right)$,
$\mathbf{b}=\left(2,-2,0\right)$, $\mathbf{c}=\left(-2,5,-3\right)$.
Let us compute various exterior products. Calculations are easier
if we introduce the basis $\left\{ \mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\right\} $
explicitly:\[
\mathbf{a}=\frac{1}{2}\left(\mathbf{e}_{2}-\mathbf{e}_{3}\right),\quad\mathbf{b}=2(\mathbf{e}_{1}-\mathbf{e}_{2}),\quad\mathbf{c}=-2\mathbf{e}_{1}+5\mathbf{e}_{2}-3\mathbf{e}_{3}.\]
We compute the 2-vector $\mathbf{a}\wedge\mathbf{b}$ by using the
properties of the exterior product, such as $\mathbf{x}\wedge\mathbf{x}=0$
and $\mathbf{x}\wedge\mathbf{y}=-\mathbf{y}\wedge\mathbf{x}$, and
simply expanding the brackets as usual in algebra:\begin{align*}
\mathbf{a}\wedge\mathbf{b} & =\frac{1}{2}\left(\mathbf{e}_{2}-\mathbf{e}_{3}\right)\wedge2\left(\mathbf{e}_{1}-\mathbf{e}_{2}\right)\\
& =\left(\mathbf{e}_{2}-\mathbf{e}_{3}\right)\wedge\left(\mathbf{e}_{1}-\mathbf{e}_{2}\right)\\
& =\mathbf{e}_{2}\wedge\mathbf{e}_{1}-\mathbf{e}_{3}\wedge\mathbf{e}_{1}-\mathbf{e}_{2}\wedge\mathbf{e}_{2}+\mathbf{e}_{3}\wedge\mathbf{e}_{2}\\
& =-\mathbf{e}_{1}\wedge\mathbf{e}_{2}+\mathbf{e}_{1}\wedge\mathbf{e}_{3}-\mathbf{e}_{2}\wedge\mathbf{e}_{3}.\end{align*}
The last expression is the result; note that now there is nothing
more to compute or to simplify. The expressions such as $\mathbf{e}_{1}\wedge\mathbf{e}_{2}$
are the basic expressions out of which the space $\mathbb{R}^{3}\wedge\mathbb{R}^{3}$
is built. Below (Sec.~\ref{sub:Properties-of-the-ext-powers}) we
will show formally that the set of these expressions is a basis in
the space $\mathbb{R}^{3}\wedge\mathbb{R}^{3}$.
Let us also compute the 3-vector $\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}$,\begin{align*}
& \mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}=\left(\mathbf{a}\wedge\mathbf{b}\right)\wedge\mathbf{c}\\
& =\left(-\mathbf{e}_{1}\wedge\mathbf{e}_{2}+\mathbf{e}_{1}\wedge\mathbf{e}_{3}-\mathbf{e}_{2}\wedge\mathbf{e}_{3}\right)\wedge(-2\mathbf{e}_{1}+5\mathbf{e}_{2}-3\mathbf{e}_{3}).\end{align*}
When we expand the brackets here, terms such as $\mathbf{e}_{1}\wedge\mathbf{e}_{2}\wedge\mathbf{e}_{1}$
will vanish because \[
\mathbf{e}_{1}\wedge\mathbf{e}_{2}\wedge\mathbf{e}_{1}=-\mathbf{e}_{2}\wedge\mathbf{e}_{1}\wedge\mathbf{e}_{1}=0,\]
so only terms containing all different vectors need to be kept, and
we find\begin{align*}
\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c} & =3\mathbf{e}_{1}\wedge\mathbf{e}_{2}\wedge\mathbf{e}_{3}+5\mathbf{e}_{1}\wedge\mathbf{e}_{3}\wedge\mathbf{e}_{2}+2\mathbf{e}_{2}\wedge\mathbf{e}_{3}\wedge\mathbf{e}_{1}\\
& =\left(3-5+2\right)\mathbf{e}_{1}\wedge\mathbf{e}_{2}\wedge\mathbf{e}_{3}=0.\end{align*}
We note that all the terms are proportional to the 3-vector $\mathbf{e}_{1}\wedge\mathbf{e}_{2}\wedge\mathbf{e}_{3}$,
so only the coefficient in front of $\mathbf{e}_{1}\wedge\mathbf{e}_{2}\wedge\mathbf{e}_{3}$
was needed; then, by coincidence, that coefficient turned out to be
zero. So the result is the zero 3-vector.\hfill{}$\blacksquare$
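The bookkeeping in this example is mechanical enough to check by a short program. The following Python sketch is not part of the text's toolkit; it stores a multivector as a dictionary mapping sorted tuples of basis indices to coefficients (a representation chosen purely for illustration), and reproduces both results above.

```python
from fractions import Fraction

def sort_sign(idx):
    # Bubble-sort the basis indices, tracking the sign of the permutation;
    # a repeated index means the exterior-product term vanishes.
    idx, sign = list(idx), 1
    for _ in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) < len(idx):
        return (), 0
    return tuple(idx), sign

def wedge(u, v):
    # Exterior product of multivectors stored as {sorted index tuple: coefficient}.
    out = {}
    for iu, cu in u.items():
        for iv, cv in v.items():
            key, sign = sort_sign(iu + iv)
            if sign:
                out[key] = out.get(key, 0) + sign * cu * cv
    return {k: c for k, c in out.items() if c}

half = Fraction(1, 2)
a = {(2,): half, (3,): -half}        # a = (e2 - e3)/2
b = {(1,): 2, (2,): -2}              # b = 2(e1 - e2)
c = {(1,): -2, (2,): 5, (3,): -3}    # c = -2 e1 + 5 e2 - 3 e3

ab = wedge(a, b)
assert ab == {(1, 2): -1, (1, 3): 1, (2, 3): -1}
abc = wedge(ab, c)
assert abc == {}   # the zero 3-vector
```

The empty dictionary in the last assertion is the zero 3-vector, in agreement with the computation $\left(3-5+2\right)\mathbf{e}_{1}\wedge\mathbf{e}_{2}\wedge\mathbf{e}_{3}=0$.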
\paragraph{Question:}
Our original goal was to introduce a bilinear, antisymmetric product
of vectors in order to obtain a geometric representation of oriented
areas. Instead, $\mathbf{a}\wedge\mathbf{b}$ was defined algebraically,
through tensor products. It is clear that $\mathbf{a}\wedge\mathbf{b}$
is antisymmetric and bilinear, but why does it represent an oriented
area?
\subparagraph{Answer:}
Indeed, it may not be immediately clear why oriented areas should
be elements of $V\wedge V$. We have seen that the oriented area $A(\mathbf{x},\mathbf{y})$
is an antisymmetric and bilinear function of the two vectors $\mathbf{x}$
and $\mathbf{y}$. Right now we have constructed the space $V\wedge V$
simply as the \emph{space of antisymmetric products}. By constructing
that space merely out of the axioms of the antisymmetric product,
we already covered \emph{every} \emph{possible} bilinear antisymmetric
product. This means that \emph{any} antisymmetric and bilinear function
of the two vectors $\mathbf{x}$ and $\mathbf{y}$ is proportional
to $\mathbf{x}\wedge\mathbf{y}$ or, more generally, is a \emph{linear}
\emph{function} of $\mathbf{x}\wedge\mathbf{y}$ (perhaps with values
in a different space). Therefore, the space of oriented areas (that
is, the space of linear combinations of $A(\mathbf{x},\mathbf{y})$
for various $\mathbf{x}$ and $\mathbf{y}$) is in any case mapped
to a subspace of $V\wedge V$. We have also seen that oriented areas
in $N$ dimensions can be represented through ${N \choose 2}$ projections,
which indicates that they are vectors in some ${N \choose 2}$-dimen\-sion\-al
space. We will see below that the space $V\wedge V$ has exactly this
dimension (Theorem~2 in Sec.~\ref{sub:Properties-of-the-ext-powers}).
Therefore, we can expect that the space of oriented areas coincides
with $V\wedge V$. Below we will be working in a space $V$ with a
scalar product, where the notions of area and volume are well defined.
Then we will see (Sec.~\ref{sub:Volumes-of-k-dimensional}) that
tensors from $V\wedge V$ and the higher exterior powers of $V$ indeed
correspond in a natural way to oriented areas, or more generally to
oriented volumes of a certain dimension.
\paragraph{Remark: Origin of the name {}``exterior.''}
The construction of the exterior product\index{exterior product!origin of the name}
is a modern formulation of the ideas dating back to H. Grassmann (1844).
A 2-vector $\mathbf{a}\wedge\mathbf{b}$ is interpreted geometrically
as the oriented area of the parallelogram spanned by the vectors $\mathbf{a}$
and $\mathbf{b}$. Similarly, a 3-vector $\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}$
represents the oriented 3-volume of a parallelepiped spanned by $\left\{ \mathbf{a},\mathbf{b},\mathbf{c}\right\} $.
Due to the antisymmetry of the exterior product, we have $(\mathbf{a}\wedge\mathbf{b})\wedge(\mathbf{a}\wedge\mathbf{c})=0$,
$(\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c})\wedge(\mathbf{b}\wedge\mathbf{d})=0$,
etc. We can interpret this geometrically by saying that the {}``product''
of two volumes is zero if these volumes have a vector in common. This
motivated Grassmann to call his antisymmetric product {}``exterior.''
In his reasoning, the product of two {}``extensive quantities''
(such as lines, areas, or volumes) is nonzero only when each of the
two quantities is geometrically {}``to the exterior'' (outside)
of the other.
\paragraph{Exercise 2:}
Show that in a \emph{two}-dimensional space $V$, any 3-vector such
as $\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c}$ can be simplified
to the zero 3-vector. Prove the same for $n$-vectors in $N$-dimensional
spaces when $n>N$.\hfill{}$\blacksquare$
One can also consider the exterior powers of the \emph{dual} space
$V^{*}$. Tensors from $\wedge^{n}V^{*}$ are usually (for historical
reasons) called $n$-\textbf{forms}\index{$n$-forms} (rather than
{}``$n$-covectors'').
\paragraph{Question:}
Where is the star here, really? Is the space $\wedge^{n}\left(V^{*}\right)$
different from $\left(\wedge^{n}V\right)^{*}$?
\subparagraph{Answer:}
Good that you asked. These spaces are canonically isomorphic, but
there is a subtle technical issue worth mentioning. Consider an example:
$\mathbf{a}^{*}\wedge\mathbf{b}^{*}\in\wedge^{2}(V^{*})$ can act
upon $\mathbf{u}\wedge\mathbf{v}\in\wedge^{2}V$ by the standard tensor
product rule, namely $\mathbf{a}^{*}\otimes\mathbf{b}^{*}$ acts on
$\mathbf{u}\otimes\mathbf{v}$ as \[
\left(\mathbf{a}^{*}\otimes\mathbf{b}^{*}\right)\left(\mathbf{u}\otimes\mathbf{v}\right)=\mathbf{a}^{*}(\mathbf{u})\,\mathbf{b}^{*}(\mathbf{v}),\]
so by using the definition of $\mathbf{a}^{*}\wedge\mathbf{b}^{*}$
and $\mathbf{u}\wedge\mathbf{v}$ through the tensor product, we find\begin{align*}
\left(\mathbf{a}^{*}\wedge\mathbf{b}^{*}\right)\left(\mathbf{u}\wedge\mathbf{v}\right) & =\left(\mathbf{a}^{*}\otimes\mathbf{b}^{*}-\mathbf{b}^{*}\otimes\mathbf{a}^{*}\right)\left(\mathbf{u}\otimes\mathbf{v}-\mathbf{v}\otimes\mathbf{u}\right)\\
& =2\mathbf{a}^{*}(\mathbf{u})\,\mathbf{b}^{*}(\mathbf{v})-2\mathbf{b}^{*}(\mathbf{u})\,\mathbf{a}^{*}(\mathbf{v}).\end{align*}
We got a \textbf{combinatorial} \textbf{factor}\index{combinatorial factor}
2, that is, a factor that arises because we have \emph{two} permutations
of the set $\left(\mathbf{a},\mathbf{b}\right)$. With $\wedge^{n}\left(V^{*}\right)$
and $\left(\wedge^{n}V\right)^{*}$ we get a factor $n!$. It is not
always convenient to have this combinatorial factor. For example,
in a field of finite characteristic the number $n!$ might be \emph{equal to
zero} for large enough $n$. In these cases we could \emph{redefine}
the action of $\mathbf{a}^{*}\wedge\mathbf{b}^{*}$ on $\mathbf{u}\wedge\mathbf{v}$
as \[
\left(\mathbf{a}^{*}\wedge\mathbf{b}^{*}\right)\left(\mathbf{u}\wedge\mathbf{v}\right)\equiv\mathbf{a}^{*}(\mathbf{u})\,\mathbf{b}^{*}(\mathbf{v})-\mathbf{b}^{*}(\mathbf{u})\,\mathbf{a}^{*}(\mathbf{v}).\]
If we are working over a field of characteristic zero, we are able to divide
by any integer, so we may keep combinatorial factors in the denominators
of expressions where such factors appear. For example, if $\left\{ \mathbf{e}_{j}\right\} $
is a basis in $V$ and $\omega=\mathbf{e}_{1}\wedge...\wedge\mathbf{e}_{N}$
is the corresponding basis tensor in the one-dimen\-sion\-al space
$\wedge^{N}V$, the dual basis tensor in $\left(\wedge^{N}V\right)^{*}$
could be defined by \[
\omega^{*}=\frac{1}{N!}\mathbf{e}_{1}^{*}\wedge...\wedge\mathbf{e}_{N}^{*},\quad\text{so that}\:\omega^{*}(\omega)=1.\]
The need for such combinatorial factors is a minor technical inconvenience
that does not arise too often. We may give the following definition
that avoids dividing by combinatorial factors (but now we use permutations;
see Appendix~\ref{sub:Properties-of-permutations}).
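The combinatorial factor 2 found above is easy to confirm numerically. In the following hedged sketch, the covector--vector pairing is modeled as the coordinate dot product and the sample vectors are arbitrary; the full tensor-product evaluation of $\left(\mathbf{a}^{*}\wedge\mathbf{b}^{*}\right)\left(\mathbf{u}\wedge\mathbf{v}\right)$ comes out exactly twice the expression without the factor.

```python
def pair(f, v):
    # A covector acting on a vector, modeled as the dot pairing in coordinates.
    return sum(x * y for x, y in zip(f, v))

# arbitrary sample covectors a*, b* and vectors u, v in K^3
a, b = [1, -2, 3], [0, 4, 1]
u, v = [2, 1, -1], [5, 0, 2]

# (a* (x) b* - b* (x) a*) acting on (u (x) v - v (x) u), expanded term by term
full = (pair(a, u) * pair(b, v) - pair(a, v) * pair(b, u)
        - pair(b, u) * pair(a, v) + pair(b, v) * pair(a, u))

# the same pairing with the combinatorial factor removed
reduced = pair(a, u) * pair(b, v) - pair(b, u) * pair(a, v)

assert full == 2 * reduced
```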
\paragraph{Definition 3:}
The action of a $k$-form $\mathbf{f}_{1}^{*}\wedge...\wedge\mathbf{f}_{k}^{*}$
on a $k$-vector $\mathbf{v}_{1}\wedge...\wedge\mathbf{v}_{k}$ is
defined by\[
\sum_{\sigma}(-1)^{\left|\sigma\right|}\mathbf{f}_{1}^{*}(\mathbf{v}_{\sigma(1)})...\mathbf{f}_{k}^{*}(\mathbf{v}_{\sigma(k)}),\]
where the summation is performed over all permutations $\sigma$ of
the ordered set $\left(1,...,k\right)$.
\paragraph{Example~2:}
With $k=3$ we have\begin{align*}
& (\mathbf{p}^{*}\wedge\mathbf{q}^{*}\wedge\mathbf{r}^{*})(\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c})\\
& =\mathbf{p}^{*}(\mathbf{a})\mathbf{q}^{*}(\mathbf{b})\mathbf{r}^{*}(\mathbf{c})-\mathbf{p}^{*}(\mathbf{b})\mathbf{q}^{*}(\mathbf{a})\mathbf{r}^{*}(\mathbf{c})\\
& +\mathbf{p}^{*}(\mathbf{b})\mathbf{q}^{*}(\mathbf{c})\mathbf{r}^{*}(\mathbf{a})-\mathbf{p}^{*}(\mathbf{c})\mathbf{q}^{*}(\mathbf{b})\mathbf{r}^{*}(\mathbf{a})\\
& +\mathbf{p}^{*}(\mathbf{c})\mathbf{q}^{*}(\mathbf{a})\mathbf{r}^{*}(\mathbf{b})-\mathbf{p}^{*}(\mathbf{a})\mathbf{q}^{*}(\mathbf{c})\mathbf{r}^{*}(\mathbf{b}).\end{align*}
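In coordinates, the alternating sum of Definition~3 is precisely the standard permutation expansion of the determinant of the $k\times k$ matrix of pairings $\mathbf{f}_{i}^{*}(\mathbf{v}_{j})$. The following Python sketch checks this for $k=3$, again modeling the pairing as a dot product with arbitrarily chosen sample vectors.

```python
from itertools import permutations

def perm_sign(p):
    # Sign of a permutation of (0, ..., k-1), computed by sorting with swaps.
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

def pair(f, v):
    # The covector pairing, modeled as a dot product in coordinates.
    return sum(x * y for x, y in zip(f, v))

def k_form_on_k_vector(fs, vs):
    # Definition 3: sum over permutations sigma of
    # (-1)^|sigma| f1(v_sigma(1)) ... fk(v_sigma(k)).
    total = 0
    for sigma in permutations(range(len(vs))):
        term = perm_sign(sigma)
        for i, s in enumerate(sigma):
            term *= pair(fs[i], vs[s])
        total += term
    return total

p, q, r = [1, 0, 2], [0, 1, -1], [2, 1, 0]
a, b, c = [1, 2, 3], [0, 1, 1], [1, 0, 0]

# the matrix of pairings and its determinant by cofactor expansion
m = [[pair(f, v) for v in (a, b, c)] for f in (p, q, r)]
det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
       - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
       + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

assert k_form_on_k_vector([p, q, r], [a, b, c]) == det
```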
\paragraph{Exercise 3:}
a) Show that $\mathbf{a}\wedge\mathbf{b}\wedge\omega=\omega\wedge\mathbf{a}\wedge\mathbf{b}$
where $\omega$ is any antisymmetric tensor (e.g.~$\omega=\mathbf{x}\wedge\mathbf{y}\wedge\mathbf{z}$).
b) Show that\[
\omega_{1}\wedge\mathbf{a}\wedge\omega_{2}\wedge\mathbf{b}\wedge\omega_{3}=-\omega_{1}\wedge\mathbf{b}\wedge\omega_{2}\wedge\mathbf{a}\wedge\omega_{3},\]
where $\omega_{1}$, $\omega_{2}$, $\omega_{3}$ are arbitrary antisymmetric
tensors and $\mathbf{a},\mathbf{b}$ are vectors.
c) Due to antisymmetry, $\mathbf{a}\wedge\mathbf{a}=0$ for any vector
$\mathbf{a}\in V$. Is it also true that $\omega\wedge\omega=0$ for
any bivector $\omega\in\wedge^{2}V$?
\subsection{{*} Symmetric tensor product}
\paragraph{Question:}
At this point it is still unclear why the antisymmetric definition
is at all useful. Perhaps we could define something else, say the
symmetric product, instead of the exterior product? We could try to
define a product, say $\mathbf{a}\odot\mathbf{b}$, with some other
property, such as\[
\mathbf{a}\odot\mathbf{b}=2\mathbf{b}\odot\mathbf{a}.\]
\subparagraph{Answer:}
This does not work because, for example, we would have\[
\mathbf{b}\odot\mathbf{a}=2\mathbf{a}\odot\mathbf{b}=4\mathbf{b}\odot\mathbf{a},\]
so all the {}``$\odot$'' products would have to vanish.
We can define the \emph{symmetric} tensor product, $\otimes_{S}$,
with the property\[
\mathbf{a}\otimes_{S}\mathbf{b}=\mathbf{b}\otimes_{S}\mathbf{a},\]
but it is impossible to define anything else in a similar fashion.%
\footnote{This is a theorem due to Grassmann (1862).%
}
The space of antisymmetric tensors is the eigenspace (within $V\otimes V$)
of the exchange operator $\hat{T}$ with eigenvalue $-1$. That operator
has only eigenvectors with eigenvalues $\pm1$, so the only other
possibility is to consider the eigenspace with eigenvalue $+1$. This
eigenspace is spanned by symmetric tensors of the form $\mathbf{u}\otimes\mathbf{v}+\mathbf{v}\otimes\mathbf{u}$,
and can be considered as the space of symmetric tensor products. We
could write\[
\mathbf{a}\otimes_{S}\mathbf{b}\equiv\mathbf{a}\otimes\mathbf{b}+\mathbf{b}\otimes\mathbf{a}\]
and develop the properties of this product. However, it turns out
that the symmetric tensor product is much less useful for the purposes
of linear algebra than the antisymmetric product. This book derives
most of the results of linear algebra using the antisymmetric product
as the main tool!
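The eigenvalue statement above can be illustrated in coordinates. In this sketch (the representation of tensors in $V\otimes V$ as $N\times N$ coefficient arrays is our own choice), the exchange operator $\hat{T}$ acts as transposition of coefficients, and the antisymmetric and symmetric combinations are eigenvectors with eigenvalues $-1$ and $+1$ respectively.

```python
def tensor(u, v):
    # coefficient array of the tensor product u (x) v
    return [[x * y for y in v] for x in u]

def combine(s, t, c):
    # the tensor s + c * t, entrywise
    return [[s[i][j] + c * t[i][j] for j in range(len(s))] for i in range(len(s))]

def exchange(t):
    # the exchange operator T: u (x) v -> v (x) u, i.e. transposition
    return [list(row) for row in zip(*t)]

u, v = [1, 2, 3], [4, 5, 6]
antisym = combine(tensor(u, v), tensor(v, u), -1)   # u (x) v - v (x) u
sym = combine(tensor(u, v), tensor(v, u), +1)       # u (x) v + v (x) u

zero = [[0] * 3 for _ in range(3)]
assert exchange(antisym) == combine(zero, antisym, -1)   # eigenvalue -1
assert exchange(sym) == sym                              # eigenvalue +1
```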
\section{Properties of spaces $\wedge^{k}V$\label{sec:Properties-of-the-wedgekV}}
As we have seen, tensors from the space $V\otimes V$ are representable
by linear combinations of the form $\mathbf{a}\otimes\mathbf{b}+\mathbf{c}\otimes\mathbf{d}+...$,
but not \emph{uniquely} representable because one can transform one
such linear combination into another by using the axioms of the tensor
product. Similarly, $n$-vectors are not uniquely representable by
linear combinations of exterior products. For example,\[
\mathbf{a}\wedge\mathbf{b}+\mathbf{a}\wedge\mathbf{c}+\mathbf{b}\wedge\mathbf{c}=(\mathbf{a}+\mathbf{b})\wedge(\mathbf{b}+\mathbf{c})\]
since $\mathbf{b}\wedge\mathbf{b}=0$. In other words, the 2-vector
$\omega\equiv\mathbf{a}\wedge\mathbf{b}+\mathbf{a}\wedge\mathbf{c}+\mathbf{b}\wedge\mathbf{c}$
has an alternative representation containing only a single-term exterior
product, $\omega=\mathbf{r}\wedge\mathbf{s}$ where $\mathbf{r}=\mathbf{a}+\mathbf{b}$
and $\mathbf{s}=\mathbf{b}+\mathbf{c}$.
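This equality is easy to confirm mechanically with a dictionary representation of multivectors (a choice made here purely for illustration; the sample vectors in $\mathbb{R}^{4}$ are arbitrary): the term $\mathbf{b}\wedge\mathbf{b}$ vanishes automatically, and both sides agree.

```python
def sort_sign(idx):
    # Sort basis indices, tracking the permutation sign; repeats give zero.
    idx, sign = list(idx), 1
    for _ in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) < len(idx):
        return (), 0
    return tuple(idx), sign

def wedge(u, v):
    # Exterior product of multivectors {sorted index tuple: coefficient}.
    out = {}
    for iu, cu in u.items():
        for iv, cv in v.items():
            key, sign = sort_sign(iu + iv)
            if sign:
                out[key] = out.get(key, 0) + sign * cu * cv
    return {k: c for k, c in out.items() if c}

def madd(u, v, c=1):
    # u + c*v; 1-index keys make this work for plain vectors as well
    out = dict(u)
    for k, x in v.items():
        out[k] = out.get(k, 0) + c * x
    return {k: x for k, x in out.items() if x}

a = {(1,): 1, (3,): 2, (4,): -1}
b = {(2,): 3, (3,): 1}
c = {(1,): 2, (2,): 1, (4,): 4}

lhs = madd(madd(wedge(a, b), wedge(a, c)), wedge(b, c))   # a^b + a^c + b^c
rhs = wedge(madd(a, b), madd(b, c))                       # (a+b) ^ (b+c)
assert lhs == rhs
```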
\paragraph{Exercise:\index{single-term exterior products}}
Show that any 2-vector in a \emph{three}-dimen\-sion\-al space is
representable by a single-term exterior product, i.e.~is equal to a 2-vector
of the form $\mathbf{a}\wedge\mathbf{b}$.
\emph{Hint}: Choose a basis $\left\{ \mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\right\} $
and show that $\alpha\mathbf{e}_{1}\wedge\mathbf{e}_{2}+\beta\mathbf{e}_{1}\wedge\mathbf{e}_{3}+\gamma\mathbf{e}_{2}\wedge\mathbf{e}_{3}$
is equal to a single-term product.\hfill{}$\blacksquare$
What about higher-dimen\-sion\-al spaces? We will show (see the
Exercise at the end of Sec.~\ref{sub:Properties-of-the-ext-powers})
that $n$-vectors cannot in general be reduced to a single-term product.
This is, however, always possible for $(N-1)$-vectors in an $N$-dimen\-sion\-al
space. (You showed this for $N=3$ in the exercise above.)
\paragraph{Statement:}
Any $(N-1)$-vector in an $N$-dimen\-sion\-al space can be written
as a single-term exterior product of the form $\mathbf{a}_{1}\wedge...\wedge\mathbf{a}_{N-1}$.
\subparagraph{Proof:}
We prove this by using induction in $N$. The basis of induction is
$N=2$, where there is nothing to prove. The induction step: Suppose
that the statement is proved for $(N-1)$-vectors in $N$-dimen\-sion\-al
spaces; we need to prove it for $N$-vectors in $(N+1)$-dimen\-sion\-al
spaces. Choose a basis $\left\{ \mathbf{e}_{1},...,\mathbf{e}_{N+1}\right\} $
in the space. Any $N$-vector $\omega$ can be written as a linear
combination of exterior product terms,\begin{align*}
\omega & =\alpha_{1}\mathbf{e}_{2}\wedge...\wedge\mathbf{e}_{N+1}+\alpha_{2}\mathbf{e}_{1}\wedge\mathbf{e}_{3}\wedge...\wedge\mathbf{e}_{N+1}+...\\
& \quad+\alpha_{N}\mathbf{e}_{1}\wedge...\wedge\mathbf{e}_{N-1}\wedge\mathbf{e}_{N+1}+\alpha_{N+1}\mathbf{e}_{1}\wedge...\wedge\mathbf{e}_{N},\end{align*}
where $\left\{ \alpha_{i}\right\} $ are some constants.
Note that any tensor $\omega\in\wedge^{N}V$ can be written in this
way simply by expressing every vector through the basis and by expanding
the exterior products. The result will be a linear combination of
the form shown above, containing at most $N+1$ single-term exterior
products of the form $\mathbf{e}_{1}\wedge...\wedge\mathbf{e}_{N}$,
$\mathbf{e}_{2}\wedge...\wedge\mathbf{e}_{N+1}$, and so on. We do
not yet know whether these single-term exterior products constitute
a linearly independent set; this will be established in Sec.~\ref{sub:Properties-of-the-ext-powers}.
Presently, we will not need this property.
Now we would like to transform the expression above to a single term.
We move $\mathbf{e}_{N+1}$ outside brackets in the first $N$ terms:\begin{align*}
\omega & =\big(\alpha_{1}\mathbf{e}_{2}\wedge...\wedge\mathbf{e}_{N}+...+\alpha_{N}\mathbf{e}_{1}\wedge...\wedge\mathbf{e}_{N-1}\big)\wedge\mathbf{e}_{N+1}\\
& \qquad+\alpha_{N+1}\mathbf{e}_{1}\wedge...\wedge\mathbf{e}_{N}\\
& \equiv\psi\wedge\mathbf{e}_{N+1}+\alpha_{N+1}\mathbf{e}_{1}\wedge...\wedge\mathbf{e}_{N},\end{align*}
where in the last line we have introduced an auxiliary $(N-1)$-vector
$\psi$. If it happens that $\psi=0$, there is nothing left to prove.
Otherwise, at least one of the $\alpha_{i}$ must be nonzero; without
loss of generality, suppose that $\alpha_{N}\neq0$ and rewrite $\omega$
as \[
\omega=\psi\wedge\mathbf{e}_{N+1}+\alpha_{N+1}\mathbf{e}_{1}\wedge...\wedge\mathbf{e}_{N}=\psi\wedge\big(\mathbf{e}_{N+1}+\frac{\alpha_{N+1}}{\alpha_{N}}\mathbf{e}_{N}\big).\]
Now we note that $\psi$ belongs to the space of $\left(N-1\right)$-vectors
over the $N$-dimen\-sion\-al subspace spanned by $\left\{ \mathbf{e}_{1},...,\mathbf{e}_{N}\right\} $.
By the inductive assumption, $\psi$ can be written as a single-term
exterior product, $\psi=\mathbf{a}_{1}\wedge...\wedge\mathbf{a}_{N-1}$,
of some vectors $\left\{ \mathbf{a}_{i}\right\} $. Denoting \[
\mathbf{a}_{N}\equiv\mathbf{e}_{N+1}+\frac{\alpha_{N+1}}{\alpha_{N}}\mathbf{e}_{N},\]
we obtain \[
\omega=\mathbf{a}_{1}\wedge...\wedge\mathbf{a}_{N-1}\wedge\mathbf{a}_{N},\]
i.e. $\omega$ can be represented as a single-term exterior product.\hfill{}$\blacksquare$
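As a concrete illustration (a worked example of ours, in the notation of the proof), take $N+1=3$ with $\alpha_{1}=2$, $\alpha_{2}=3$, $\alpha_{3}=4$:

```latex
% Worked instance for N+1 = 3 (2-vectors in a 3-dimensional space):
\[
\omega=2\,\mathbf{e}_{2}\wedge\mathbf{e}_{3}+3\,\mathbf{e}_{1}\wedge\mathbf{e}_{3}+4\,\mathbf{e}_{1}\wedge\mathbf{e}_{2}
=\left(3\,\mathbf{e}_{1}+2\,\mathbf{e}_{2}\right)\wedge\mathbf{e}_{3}+4\,\mathbf{e}_{1}\wedge\mathbf{e}_{2},
\]
% so psi = 3 e1 + 2 e2, and alpha_2 = 3 is nonzero; therefore
\[
\omega=\left(3\,\mathbf{e}_{1}+2\,\mathbf{e}_{2}\right)\wedge\big(\mathbf{e}_{3}+\frac{4}{3}\,\mathbf{e}_{2}\big),
\]
% since (3 e1 + 2 e2) ^ (4/3) e2 = 4 e1 ^ e2 restores the last summand.
```

Expanding the brackets in the last line recovers $\omega$, so $\omega$ is indeed a single-term exterior product.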
\subsection{Linear maps between spaces $\wedge^{k}V$\label{sub:Linear-maps-between-spaces}}
Since the spaces $\wedge^{k}V$ are vector spaces, we may consider
linear maps between them.
The simplest example is the map\[
L_{\mathbf{a}}:\omega\mapsto\mathbf{a}\wedge\omega,\]
mapping $\wedge^{k}V\rightarrow\wedge^{k+1}V$; here the vector $\mathbf{a}$
is \emph{fixed}. It is important to check that $L_{\mathbf{a}}$ is
a \emph{linear} map between these spaces. How do we check this? We
need to check that $L_{\mathbf{a}}$ maps a linear combination of
tensors into linear combinations; this is easy to see,\begin{align*}
L_{\mathbf{a}} & (\omega+\lambda\omega^{\prime})=\mathbf{a}\wedge(\omega+\lambda\omega')\\
& =\mathbf{a}\wedge\omega+\lambda\mathbf{a}\wedge\omega'=L_{\mathbf{a}}\omega+\lambda L_{\mathbf{a}}\omega'.\end{align*}
Let us now fix a covector $\mathbf{a}^{*}$. A covector is a map $V\rightarrow\mathbb{K}$.
In Lemma~2 of Sec.~\ref{sub:Dimension-of-tensor} we have used covectors
to define linear maps $\mathbf{a}^{*}:V\otimes W\rightarrow W$ according
to Eq.~(\ref{eq:fg rule}), mapping $\mathbf{v}\otimes\mathbf{w}\mapsto\mathbf{a}^{*}\left(\mathbf{v}\right)\mathbf{w}$.
Now we will apply the analogous construction to exterior powers and
construct a map $V\wedge V\rightarrow V$. Let us denote this map
by $\iota_{\mathbf{a}^{*}}$.
It would be incorrect to define the map $\iota_{\mathbf{a}^{*}}$
by the formula $\iota_{\mathbf{a}^{*}}(\mathbf{v}\wedge\mathbf{w})=\mathbf{a}^{*}\left(\mathbf{v}\right)\mathbf{w}$
because such a definition does not respect the antisymmetry of the
wedge product and thus violates the linearity condition, \[
\iota_{\mathbf{a}^{*}}\left(\mathbf{w}\wedge\mathbf{v}\right)\overset{!}{=}\iota_{\mathbf{a}^{*}}\left(\left(-1\right)\mathbf{v}\wedge\mathbf{w}\right)=-\iota_{\mathbf{a}^{*}}\left(\mathbf{v}\wedge\mathbf{w}\right)\neq\mathbf{a}^{*}(\mathbf{v})\mathbf{w}.\]
So we need to act with $\mathbf{a}^{*}$ on \emph{each} of the vectors
in a wedge product and make sure that the correct minus sign comes
out. An acceptable formula for the map $\iota_{\mathbf{a}^{*}}:\wedge^{2}V\rightarrow V$
is\[
\iota_{\mathbf{a}^{*}}\left(\mathbf{v}\wedge\mathbf{w}\right)\equiv\mathbf{a}^{*}\left(\mathbf{v}\right)\mathbf{w}-\mathbf{a}^{*}\left(\mathbf{w}\right)\mathbf{v}.\]
(Please check that the linearity condition now holds!) This is how
we will define the map $\iota_{\mathbf{a}^{*}}$ on $\wedge^{2}V$.
Let us now extend $\iota_{\mathbf{a}^{*}}:\wedge^{2}V\rightarrow V$
to a map \[
\iota_{\mathbf{a}^{*}}:\wedge^{k}V\rightarrow\wedge^{k-1}V,\]
defined as follows: \begin{align}
\iota_{\mathbf{a}^{*}}\mathbf{v} & \equiv\mathbf{a}^{*}(\mathbf{v}),\nonumber \\
\iota_{\mathbf{a}^{*}}(\mathbf{v}\wedge\omega) & \equiv\mathbf{a}^{*}(\mathbf{v})\omega-\mathbf{v}\wedge(\iota_{\mathbf{a}^{*}}\omega).\label{eq:inductive}\end{align}
This definition is \emph{inductive}, i.e.~it shows how to define
$\iota_{\mathbf{a}^{*}}$ on $\wedge^{k}V$ if we know how to define
it on $\wedge^{k-1}V$. The action of $\iota_{\mathbf{a}^{*}}$ on
a sum of terms is defined by requiring linearity, \[
\iota_{\mathbf{a}^{*}}\left(A+\lambda B\right)\equiv\iota_{\mathbf{a}^{*}}\left(A\right)+\lambda\iota_{\mathbf{a}^{*}}\left(B\right),\quad A,B\in\wedge^{k}V.\]
We can convert this inductive definition into a more explicit formula:
if $\omega=\mathbf{v}_{1}\wedge...\wedge\mathbf{v}_{k}\in\wedge^{k}V$
then \begin{align*}
\iota_{\mathbf{a}^{*}} & (\mathbf{v}_{1}\wedge...\wedge\mathbf{v}_{k})\equiv\mathbf{a}^{*}(\mathbf{v}_{1})\mathbf{v}_{2}\wedge...\wedge\mathbf{v}_{k}-\mathbf{a}^{*}(\mathbf{v}_{2})\mathbf{v}_{1}\wedge\mathbf{v}_{3}\wedge...\wedge\mathbf{v}_{k}\\
& +...+\left(-1\right)^{k-1}\mathbf{a}^{*}(\mathbf{v}_{k})\mathbf{v}_{1}\wedge...\wedge\mathbf{v}_{k-1}.\end{align*}
This map is called the \textbf{interior product}\index{interior product}
or the \textbf{insertion} map\index{insertion map}. This is a useful
operation in linear algebra. The insertion map $\iota_{\mathbf{a}^{*}}\psi$
{}``inserts'' the covector $\mathbf{a}^{*}$ into the tensor $\psi\in\wedge^{k}V$
by acting with $\mathbf{a}^{*}$ on each of the vectors in the exterior
product that makes up $\psi$.
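Both forms of the definition, the inductive rule~(\ref{eq:inductive}) and the explicit alternating sum, can be implemented and compared. In the following Python sketch (our own representation: a decomposable $k$-vector is a list of coordinate vectors, a general tensor a dictionary of sorted basis-index tuples, and the covector pairing a dot product), the two agree, and the result is also unchanged under a re-representation such as $\mathbf{v}_{1}\wedge\mathbf{v}_{2}\wedge\mathbf{v}_{3}=\mathbf{v}_{2}\wedge(\mathbf{v}_{3}-\mathbf{v}_{1})\wedge(\mathbf{v}_{3}+\mathbf{v}_{2})$.

```python
from functools import reduce

def sort_sign(idx):
    # Sort basis indices, tracking the permutation sign; repeats give zero.
    idx, sign = list(idx), 1
    for _ in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) < len(idx):
        return (), 0
    return tuple(idx), sign

def wedge(u, v):
    # Exterior product of multivectors {sorted index tuple: coefficient}.
    out = {}
    for iu, cu in u.items():
        for iv, cv in v.items():
            key, sign = sort_sign(iu + iv)
            if sign:
                out[key] = out.get(key, 0) + sign * cu * cv
    return {k: c for k, c in out.items() if c}

def madd(u, v, c=1):
    # u + c*v for multivectors.
    out = dict(u)
    for k, x in v.items():
        out[k] = out.get(k, 0) + c * x
    return {k: x for k, x in out.items() if x}

def mv(coords):
    # A coordinate vector as a multivector with 1-index keys.
    return {(i + 1,): c for i, c in enumerate(coords) if c}

def to_wedge(vs):
    # v1 ^ ... ^ vk; the empty product is the scalar 1 under the key ().
    return reduce(wedge, map(mv, vs), {(): 1})

def pair(f, v):
    # The covector pairing, modeled as a dot product in coordinates.
    return sum(x * y for x, y in zip(f, v))

def iota_inductive(astar, vs):
    # Inductive rule: iota(v ^ omega) = astar(v) omega - v ^ iota(omega).
    if len(vs) == 1:
        return {(): pair(astar, vs[0])}
    first = madd({}, to_wedge(vs[1:]), pair(astar, vs[0]))
    return madd(first, wedge(mv(vs[0]), iota_inductive(astar, vs[1:])), -1)

def iota_explicit(astar, vs):
    # Explicit sum: sum_j (-1)^(j-1) astar(v_j) (wedge with v_j omitted).
    out = {}
    for j in range(len(vs)):
        out = madd(out, to_wedge(vs[:j] + vs[j + 1:]),
                   (-1) ** j * pair(astar, vs[j]))
    return out

astar = [1, -2, 0, 3]
v1, v2, v3 = [1, 0, 2, 0], [0, 1, 1, 1], [2, 0, 0, 1]
assert iota_inductive(astar, [v1, v2, v3]) == iota_explicit(astar, [v1, v2, v3])

# The result does not depend on the representation of the 3-vector:
alt = [v2, [x - y for x, y in zip(v3, v1)], [x + y for x, y in zip(v3, v2)]]
assert to_wedge(alt) == to_wedge([v1, v2, v3])
assert iota_explicit(astar, alt) == iota_explicit(astar, [v1, v2, v3])
```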
Let us check formally that the insertion map is linear.
\paragraph{Statement:}
The map $\iota_{\mathbf{a}^{*}}:\wedge^{k}V\rightarrow\wedge^{k-1}V$
for $1\leq k\leq N$ is a well-defined linear map, according to the
inductive definition.
\subparagraph{Proof:}
First, we need to check that it maps linear combinations into linear
combinations; this is quite easy to see by induction, using the fact
that $\mathbf{a}^{*}:V\rightarrow\mathbb{K}$ is linear. However,
this type of linearity is not sufficient; we also need to check that
the \emph{result} of the map, i.e.~the tensor $\iota_{\mathbf{a}^{*}}(\omega)$,
is defined \emph{independently} \emph{of} \emph{the} \emph{representation}
of $\omega$ through vectors such as $\mathbf{v}_{i}$. The problem
is that there are many such representations; for example, some tensor $\omega\in\wedge^{3}V$
might be written using different vectors as \[
\omega=\mathbf{v}_{1}\wedge\mathbf{v}_{2}\wedge\mathbf{v}_{3}=\mathbf{v}_{2}\wedge(\mathbf{v}_{3}-\mathbf{v}_{1})\wedge(\mathbf{v}_{3}+\mathbf{v}_{2})\equiv\tilde{\mathbf{v}}_{1}\wedge\tilde{\mathbf{v}}_{2}\wedge\tilde{\mathbf{v}}_{3}.\]
We need to verify that any such equivalent representation yields
the same resulting tensor $\iota_{\mathbf{a}^{*}}(\omega)$, despite
the fact that the definition of $\iota_{\mathbf{a}^{*}}$ \emph{appears}
to depend on the choice of the vectors $\mathbf{v}_{i}$. Only then
will it be proved that $\iota_{\mathbf{a}^{*}}$ is a linear map $\wedge^{k}V\rightarrow\wedge^{k-1}V$.
An equivalent representation of a tensor $\omega$ can be obtained
only by using the properties of the exterior product, namely linearity
and antisymmetry. Therefore, we need to verify that $\iota_{\mathbf{a}^{*}}(\omega)$
does not change when we change the representation of $\omega$ in
these two ways: 1) expanding a linear combination,\begin{equation}
(\mathbf{x}+\lambda\mathbf{y})\wedge...\mapsto\mathbf{x}\wedge...+\lambda\mathbf{y}\wedge...;\label{eq:change repr 1}\end{equation}
2) interchanging the order of two vectors in the exterior product
and changing the sign,\begin{equation}
\mathbf{x}\wedge\mathbf{y}\wedge...\mapsto-\mathbf{y}\wedge\mathbf{x}\wedge...\label{eq:change repr 2}\end{equation}
It is clear that $\mathbf{a}^{*}(\mathbf{x}+\lambda\mathbf{y})=\mathbf{a}^{*}(\mathbf{x})+\lambda\mathbf{a}^{*}(\mathbf{y})$;
it follows by induction that $\iota_{\mathbf{a}^{*}}\omega$ does
not change under a change of representation of the type~(\ref{eq:change repr 1}).
Now we consider the change of representation of the type~(\ref{eq:change repr 2}).
We have, by definition of $\iota_{\mathbf{a}^{*}}$,\[
\iota_{\mathbf{a}^{*}}(\mathbf{v}_{1}\wedge\mathbf{v}_{2}\wedge\chi)=\mathbf{a}^{*}(\mathbf{v}_{1})\mathbf{v}_{2}\wedge\chi-\mathbf{a}^{*}(\mathbf{v}_{2})\mathbf{v}_{1}\wedge\chi+\mathbf{v}_{1}\wedge\mathbf{v}_{2}\wedge\iota_{\mathbf{a}^{*}}(\chi),\]
where we have denoted by $\chi$ the rest of the exterior product.
It is clear from the above expression that \[
\iota_{\mathbf{a}^{*}}(\mathbf{v}_{1}\wedge\mathbf{v}_{2}\wedge\chi)=-\iota_{\mathbf{a}^{*}}(\mathbf{v}_{2}\wedge\mathbf{v}_{1}\wedge\chi)=\iota_{\mathbf{a}^{*}}(-\mathbf{v}_{2}\wedge\mathbf{v}_{1}\wedge\chi).\]
This proves that $\iota_{\mathbf{a}^{*}}(\omega)$ does not change
under a change of representation of $\omega$ of the type~(\ref{eq:change repr 2}).
This concludes the proof.\hfill{}$\blacksquare$
\paragraph{Remark:}
It is apparent from the proof that the \emph{minus sign} in the inductive
definition~(\ref{eq:inductive}) is crucial for the linearity of
the map $\iota_{\mathbf{a}^{*}}$. Indeed, if we attempt to define
a map by a formula such as\[
\mathbf{v}_{1}\wedge\mathbf{v}_{2}\mapsto\mathbf{a}^{*}(\mathbf{v}_{1})\mathbf{v}_{2}+\mathbf{a}^{*}(\mathbf{v}_{2})\mathbf{v}_{1},\]
the result will \emph{not} be a linear map $\wedge^{2}V\rightarrow V$
despite the appearance of linearity. The correct formula must take
into account the fact that $\mathbf{v}_{1}\wedge\mathbf{v}_{2}=-\mathbf{v}_{2}\wedge\mathbf{v}_{1}$.
\paragraph{Exercise:}
Show by induction in $k$ that\[
L_{\mathbf{x}}\iota_{\mathbf{a}^{*}}\omega+\iota_{\mathbf{a}^{*}}L_{\mathbf{x}}\omega=\mathbf{a}^{*}(\mathbf{x})\omega,\quad\forall\omega\in\wedge^{k}V.\]
In other words, the linear operator $L_{\mathbf{x}}\iota_{\mathbf{a}^{*}}+\iota_{\mathbf{a}^{*}}L_{\mathbf{x}}:\wedge^{k}V\rightarrow\wedge^{k}V$
is simply the multiplication by the number $\mathbf{a}^{*}(\mathbf{x})$.
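The identity in this exercise can at least be sanity-checked numerically for $k=2$ (this does not replace the inductive proof). A Python sketch, with the dictionary representation of multivectors again chosen purely for illustration:

```python
def sort_sign(idx):
    # Sort basis indices, tracking the permutation sign; repeats give zero.
    idx, sign = list(idx), 1
    for _ in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) < len(idx):
        return (), 0
    return tuple(idx), sign

def wedge(u, v):
    # Exterior product of multivectors {sorted index tuple: coefficient}.
    out = {}
    for iu, cu in u.items():
        for iv, cv in v.items():
            key, sign = sort_sign(iu + iv)
            if sign:
                out[key] = out.get(key, 0) + sign * cu * cv
    return {k: c for k, c in out.items() if c}

def madd(u, v, c=1):
    # u + c*v for multivectors.
    out = dict(u)
    for k, x in v.items():
        out[k] = out.get(k, 0) + c * x
    return {k: x for k, x in out.items() if x}

def interior(astar, omega):
    # The insertion map: act with astar on each slot, alternating the sign.
    out = {}
    for idx, c in omega.items():
        for j, i in enumerate(idx):
            key = idx[:j] + idx[j + 1:]
            out[key] = out.get(key, 0) + (-1) ** j * astar[i - 1] * c
    return {k: v for k, v in out.items() if v}

astar = [1, -2, 3]                            # the covector a*
x = {(1,): 2, (2,): 1, (3,): -1}              # the fixed vector of L_x
omega = {(1, 2): 2, (1, 3): 5, (2, 3): -1}    # a sample 2-vector

lhs = madd(wedge(x, interior(astar, omega)),   # L_x iota omega
           interior(astar, wedge(x, omega)))   # + iota L_x omega
ax = sum(astar[i - 1] * c for (i,), c in x.items())   # the number a*(x)
assert ax == -3
assert lhs == {k: ax * c for k, c in omega.items()}
```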
\subsection{Exterior product and linear dependence\label{sub:Properties-of-the-ext-powers}}
The exterior product is useful in many ways. One powerful property
of the exterior product is its close relation to linear independence
of sets of vectors. For example, if $\mathbf{u}=\lambda\mathbf{v}$
then $\mathbf{u}\wedge\mathbf{v}=0$. More generally:
\paragraph{Theorem 1:}
A set $\left\{ \mathbf{v}_{1},...,\mathbf{v}_{k}\right\} $ of vectors
from $V$ is linearly independent if and only if $(\mathbf{v}_{1}\wedge\mathbf{v}_{2}\wedge...\wedge\mathbf{v}_{k})\neq0$,
i.e.~it is a nonzero tensor from $\wedge^{k}V$.
\subparagraph{Proof:}
If $\left\{ \mathbf{v}_{j}\right\} $ is linearly dependent then without
loss of generality we may assume that $\mathbf{v}_{1}$ is a linear
combination of other vectors, $\mathbf{v}_{1}=\sum_{j=2}^{k}\lambda_{j}\mathbf{v}_{j}$.
Then \begin{align*}
\mathbf{v}_{1}\wedge\mathbf{v}_{2}\wedge...\wedge\mathbf{v}_{k} & =\sum_{j=2}^{k}\lambda_{j}\mathbf{v}_{j}\wedge\mathbf{v}_{2}\wedge...\wedge\mathbf{v}_{j}\wedge...\wedge\mathbf{v}_{k}\\
& =\sum_{j=2}^{k}\left(-1\right)^{j-1}\mathbf{v}_{2}\wedge...\mathbf{v}_{j}\wedge\mathbf{v}_{j}\wedge...\wedge\mathbf{v}_{k}=0.\end{align*}
Conversely, we need to prove that the tensor $\mathbf{v}_{1}\wedge...\wedge\mathbf{v}_{k}\neq0$
if $\left\{ \mathbf{v}_{j}\right\} $ is linearly \emph{in}dependent.
The proof is by induction in $k$. The basis of induction is $k=1$:
if $\left\{ \mathbf{v}_{1}\right\} $ is linearly independent then
clearly $\mathbf{v}_{1}\neq0$. The induction step: Assume that the
statement is proved for $k-1$ and that $\left\{ \mathbf{v}_{1},...,\mathbf{v}_{k}\right\} $
is a linearly independent set. By Exercise~1 in Sec.~\ref{sub:Dual-vector-space}
there exists a covector $\mathbf{f}^{*}\in V^{*}$ such that $\mathbf{f}^{*}\left(\mathbf{v}_{1}\right)=1$
and $\mathbf{f}^{*}\left(\mathbf{v}_{i}\right)=0$ for $2\leq i\leq k$.
Now we apply the interior product map $\iota_{\mathbf{f}^{*}}:\wedge^{k}V\rightarrow\wedge^{k-1}V$
constructed in Sec.~\ref{sub:Linear-maps-between-spaces} to the