Lecture notes for Math 55a: Honors Abstract Algebra (Fall 2016)

If you find a mistake, omission, etc., please let me know by e-mail.

The orange balls mark our current location in the course, and the current problem set.

Ceci n’est pas un Math 55a syllabus
[No, you don’t have to know French to take Math 55a. Googling ceci+n'est suffices to turn up the explanation, such as it is.]
Update 9/9: We now have two CA’s for the class: Wyatt Mackey (wmackey@college) and Andrew (Andy) Gordon (andrewgordon@college)
[if writing from outside the Harvard network, append .harvard.edu to ...@college].
! If you are coming to class but not officially registered for Math 55 (e.g. you are auditing, or still undecided between 25a and 55a but officially signed up for 25a), send me your e-mail address so that I and the CA's can include you in class announcements.
The Sep.13 office hours will be an hour later than usual, that is, 8:30 to 10:00 PM in the Lowell House Dining Hall, due to Lowell’s Sophomore Dinner happening earlier that evening.
CA office hours are Wednesdays 8 to 10 PM in the Leverett House Dining Hall (same location as Math Nights), starting September 14.
Wyatt will hold section Wednesdays 2-3 PM in Science Center 304. Andy will hold section Fridays 2-3 PM in Science Center 310.
! First Quiz Friday, October 7 in class. Andy will hold a review section Thursday, October 6, at 8PM in Science Center Room 304 in place of his usual Friday section.
Also, no class Monday, Oct. 10 (University holiday); I will lecture Wednesday the 12th, but on a scenic detour from the main sequence of 55a topics so as not to disadvantage students observing the Yom Kippur fast. (The talk was on linear error-correcting codes, which were the hidden(?) agenda of some of the recent homework problems. While I’m at it, here is the interlude on 5777, Jewish leap years, etc., with probably too much additional information.) Thus the due date for Problem Set 5 is postponed from Friday the 7th (the quiz date) to Wednesday the 12th.
! Office Hours moved to Thursday starting the week of October 31 (paralleling the shift in the due dates of problem sets from Friday to Monday). So November 3 instead of November 1, etc.
! Second Quiz Friday, December 2 in class. Andy will hold a review section Thursday, December 1, at 7PM in Science Center Room 304 in place of his usual Friday section. I'll hold office hours 8:00-9:30 at Lowell House (not the usual 7:30-9:00, to avoid conflict with Andy's section), in addition to Tuesday's office hours.

At least in the beginning of the linear algebra unit, we’ll be following the Axler textbook closely enough that supplementary lecture notes should not be needed. Some important extensions/modifications to the treatment in Axler: While the third edition of Axler includes quotients and duality, it still lacks tensor algebra. This is no surprise, but it will not stop us in Math 55! Here’s an introduction. [As you might guess from \oplus, the TeXism for the tensor-product symbol is \otimes.] We’ll define the determinant of an operator T on a finite-dimensional space V as follows: T induces a linear operator ∧^n(T) on the top exterior power ∧^n(V) of V (where n = dim V); this exterior power is one-dimensional, so an operator on it is multiplication by some scalar; det(T) is by definition the scalar corresponding to ∧^n(T). The “top exterior power” is a subspace of the “exterior algebra” ∧(V) of V, which is the quotient of the tensor algebra by the ideal generated by {v⊗v : v in V}. We’ll still have to construct the sign homomorphism from the symmetric group on dim(V) letters to {1,−1} to make sure that this exterior algebra is as large as we expect it to be, and in particular that the (dim V)-th exterior power has dimension 1 rather than zero.
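In coordinates, this definition unwinds to the Leibniz formula det(A) = Σ_p sign(p) Π_i A[i][p(i)], with the sum over all permutations p. Here is a short Python sketch (illustrative only, not course code) that computes the sign by counting inversions and checks multiplicativity of det, which the one-dimensionality of the top exterior power makes transparent:

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation p of {0, ..., n-1}: (-1)^(number of inversions)."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    """Leibniz formula: det(A) = sum over permutations p of sign(p) * prod_i A[i][p[i]]."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

# det is multiplicative, as the exterior-power definition makes obvious:
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert det(AB) == det(A) * det(B)
```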

Interlude: normal subgroups; short exact sequences in the context of groups: A subgroup H of G is normal (satisfies H = g^−1Hg for all g in G) iff H is the kernel of some group homomorphism from G iff the injection H → G fits into a short exact sequence {1} → H → G → Q → {1}, in which case Q is the quotient group G/H. [The notation {1} for the one-element (“trivial”) group is usually abbreviated to plain 1, as in 1 → H → G → Q → 1.] This is not in Axler but can be found in any introductory text in abstract algebra; see for instance Artin, Chapter 2, section 10. Examples: 1 → An → Sn → {±1} → 1; also, the determinant homomorphism GLn(F) → F* gives the short exact sequence 1 → SLn(F) → GLn(F) → F* → 1, and this works even if F is just a commutative ring with unit as long as F* is understood as the group of invertible elements of F — for example, Z* = {±1}.
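For small n the sign homomorphism and the resulting exact sequence can be verified by brute force; the following Python sketch (illustrative, not course code) checks that sgn(pq) = sgn(p)sgn(q) on all of S4 and that the kernel A4 has index 2:

```python
from itertools import permutations

def sign(p):
    """(-1)^(number of inversions) of a permutation of {0, ..., n-1}."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations represented as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

n = 4
S_n = list(permutations(range(n)))
# sgn is a homomorphism from S_n onto {+1, -1} ...
assert all(sign(compose(p, q)) == sign(p) * sign(q) for p in S_n for q in S_n)
# ... whose kernel A_n has index 2, as in 1 -> A_n -> S_n -> {±1} -> 1:
A_n = [p for p in S_n if sign(p) == 1]
assert len(A_n) == len(S_n) // 2
```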

Some more tidbits about exterior algebra:

More will be said about exterior algebra when differential forms appear in Math 55b.

We’ll also show that a symmetric (or Hermitian) matrix is positive definite iff all its eigenvalues are positive iff it has positive principal minors (the “principal minors” are the determinants of the square submatrices of all orders containing the (1,1) entry). More generally we’ll show that the eigenvalue signs determine the signature, as does the sequence of signs of principal minors if they are all nonzero. More precisely: an invertible symmetric/Hermitian matrix has signature (r,s) where r is the number of positive eigenvalues and s is the number of negative eigenvalues; if its principal minors are all nonzero then r is the number of j in {1,2,…,n} such that the j-th and (j−1)-st minors have the same sign, and s is the number of j in that range such that the j-th and (j−1)-st minors have opposite sign [for j=1 we always count the “zeroth minor” as being the positive number 1]. This follows inductively from the fact that the determinant has sign (−1)^s and the signature (r',s') of the restriction of a pairing to a subspace has r' ≤ r and s' ≤ s.
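This recipe is easy to carry out by machine. A Python sketch (illustrative only; it uses a naive Leibniz determinant, so it is sensible only for small matrices):

```python
from itertools import permutations

def det(A):
    """Naive Leibniz-formula determinant (fine for small matrices)."""
    n = len(A)
    def sign(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

def signature_from_minors(A):
    """(r, s) of an invertible symmetric matrix from the signs of its principal
    minors, counting the 0th minor as +1; assumes all principal minors nonzero."""
    n = len(A)
    minors = [1] + [det([row[:k] for row in A[:k]]) for k in range(1, n + 1)]
    r = sum(1 for j in range(1, n + 1) if minors[j] * minors[j - 1] > 0)
    return r, n - r

# [[1, 2], [2, 1]] has eigenvalues 3 and -1, so its signature is (1, 1);
# its principal minors 1, -3 show one sign agreement and one sign change:
assert signature_from_minors([[1, 2], [2, 1]]) == (1, 1)
# All principal minors positive gives (n, 0), i.e. positive definiteness:
assert signature_from_minors([[2, 1], [1, 2]]) == (2, 0)
```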

For positive definiteness, we have the two further equivalent conditions: the symmetric (or Hermitian) matrix A=(aij) is positive definite iff there is a basis (vi) of F^n such that aij = ⟨vi,vj⟩ for all i,j, and iff there is an invertible matrix B such that A=B*B. For example, the matrix with entries 1/(i+j−1) (“Hilbert matrix”) is positive-definite, because it is the matrix of inner products (integrals on [0,1]) of the basis 1, x, x^2, …, x^(n−1) for the polynomials of degree < n. See the 10th problem set for a calculus-free proof of the positivity of the Hilbert matrix, and an evaluation of its determinant.
For any matrix B (not necessarily invertible or even square) with columns vi, the matrix B*B with entries aij = ⟨vi,vj⟩ is known as the Gram matrix of the columns of B. It is invertible iff those columns are linearly independent. If we add an (n+1)-st vector, the determinant of the Gram matrix increases by a factor equal to the squared distance between this vector and the span of the columns of B.
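For small n the positivity of the Hilbert matrix can also be checked directly with exact rational arithmetic; here is an illustrative Python sketch (naive Leibniz determinant, not course code) confirming that all principal minors of the 4×4 Hilbert matrix are positive:

```python
from fractions import Fraction
from itertools import permutations

def det(A):
    """Naive Leibniz-formula determinant over the rationals."""
    n = len(A)
    def sign(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    total = Fraction(0)
    for p in permutations(range(n)):
        term = Fraction(sign(p))
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

n = 4
# Hilbert matrix: entry (i, j) is 1/(i+j-1) with 1-based indices.
H = [[Fraction(1, i + j - 1) for j in range(1, n + 1)] for i in range(1, n + 1)]
minors = [det([row[:k] for row in H[:k]]) for k in range(1, n + 1)]
# Positive principal minors => positive definite:
assert all(m > 0 for m in minors)
```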

  • All of Chapter 8 works over an arbitrary algebraically closed field, not only over C (except for the minor point about extracting square roots, which breaks down in characteristic 2); and the first section (“Generalized Eigenvalues”) works over any field.
  • More about nilpotent operators: let T be any operator on a vector space V over a field F, not assumed algebraically closed. If V is finite-dimensional, then The Following Are Equivalent:
              (1) There exists a nonnegative integer k such that T^k = 0;
              (2) For any vector v, there exists a nonnegative integer k such that T^k v = 0;
              (3) T^n = 0, where n = dim(V).
    Note that (1) and (2) make no mention of the dimension, but are still not equivalent for operators on infinite-dimensional spaces. (For example, consider differentiation on the R-vector space R[x].) We readily deduce the further equivalent conditions:
              (4) There exists a basis for V for which T has an upper-triangular matrix with every diagonal entry equal to zero;
              (5) Every upper-triangular matrix for T has zeros on the diagonal, and there exists at least one upper-triangular matrix for T.
    Recall that the second part of (5) is automatic if F is algebraically closed.
  • The space of generalized 0-eigenvectors (the maximal subspace on which T is nilpotent) is sometimes called the nilspace of T. It is an invariant subspace. When V is finite-dimensional, V is the direct sum of the nilspace and another invariant subspace V', consisting of the intersection of the subspaces T^k(V) as k ranges over all positive integers (8.5). This can be used to prove Cayley-Hamilton using the standard definition of the characteristic polynomial as det(xI − T).
  • An example in infinite dimension where (8.5) fails: V is the real vector space of continuous functions from R to R, and T is multiplication by x. [That is a useful counterexample for many other aspects of “eigenstuff” when we try to go beyond finite dimension; for example, there are no eigenvectors, but for every real number λ the operator λI − T is not invertible!]
  • The dimension of the space of generalized c-eigenvectors (i.e., of the nilspace of T−cI) is usually called the algebraic multiplicity of c (since it’s the multiplicity of c as a root of the characteristic polynomial of T), to distinguish it from the “geometric multiplicity”, which is the dimension of ker(T−cI).
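A concrete illustration of the bullets above (a Python sketch with integer matrices, not course code): the shift operator N on F^4 is strictly upper-triangular, hence nilpotent with N^4 = 0 but N^3 ≠ 0; its nilspace is all of F^4 (algebraic multiplicity 4 for the eigenvalue 0), while ker(N) is only one-dimensional (geometric multiplicity 1).

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    n = len(A)
    R = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    for _ in range(k):
        R = matmul(R, A)
    return R

n = 4
# Shift operator: 1's on the superdiagonal, 0's elsewhere (strictly upper-triangular).
N = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]
Z = [[0] * n for _ in range(n)]
assert mat_pow(N, n) == Z        # condition (3): T^n = 0 with n = dim V
assert mat_pow(N, n - 1) != Z    # the nilpotency index can be as large as n
```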

    Our source for representation theory of finite groups (on finite-dimensional vector spaces over C) will be Artin’s Algebra, Chapter 9. We’ll omit sections 3 and 10 (which require not just topology and calculus but also, at least for §3, some material beyond 55b to do properly, namely the construction of Haar measures); also we won’t spend much time on §7, which works out in detail the representation theory of a specific group that Artin calls I (the icosahedral group, a.k.a. A5). There are many other sources for this material, some of which take a somewhat different point of view via the “group algebra” C[G] of a finite group G (a.k.a. the algebra of functions on G under convolution). See for instance Chapter 1 of Representation Theory by Fulton and Harris (mentioned in class). A canonical treatment of representations of finite groups is Serre’s Linear Representations of Finite Groups, which is the only entry for this chapter in the list of “Suggestions for Further Reading” at the end of Artin’s book (see p.604).

    While we’ll work almost exclusively over C, most of the results work equally well (though with somewhat different proofs) over any field F that contains the roots of unity of order #(G), as long as the characteristic of F is not a factor of #(G). Without roots of unity, many more results are different, but there is still a reasonably satisfactory theory available. Dropping the characteristic condition leads to much trickier territory, e.g. even Maschke’s theorem (every finite-dimensional representation is a direct sum of irreducibles) fails; some natural problems are still unsolved!

    Here’s an alternative viewpoint on representations of a finite group G (not in Artin, though you can find it elsewhere, e.g. Fulton-Harris pages 36ff.): a representation of G over a field F is equivalent to a module for the group ring F[G]. The group ring is an associative F-algebra (commutative iff G is commutative) that consists of the formal F-linear combinations of group elements. This means that F[G] is F^G as an F-vector space, and the algebra structure is defined by setting e_{g1} e_{g2} = e_{g1g2} for all g1, g2 in G, together with the F-bilinearity of the product. Thus if we identify elements of the group ring with functions G → F then the multiplication rule is (f1 * f2)(g) = Σ_{g1g2=g} f1(g1) f2(g2) — yes, it’s convolution again. To identify an F[G]-module with a representation, use the action of F to define the vector space structure, and let ρ(g) act by multiplication by the unit vector e_g. In particular, the regular representation is F[G] regarded in the usual way as a module over itself. If we identify the image of this representation with certain permutation matrices of order #(G), we get an explicit model of F[G] as a subalgebra of the algebra of square matrices of the same order. For example, if G = Z/nZ we recover the algebra of circulant matrices of order n.
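For G = Z/nZ this dictionary (group-ring multiplication = convolution = circulant matrices) is easy to see explicitly; here is an illustrative Python sketch, with coefficient lists standing for formal linear combinations (not course code):

```python
def convolve(f, g):
    """Multiplication in F[Z/nZ]: (f*g)(k) = sum over i+j = k (mod n) of f(i)g(j)."""
    n = len(f)
    return [sum(f[i] * g[(k - i) % n] for i in range(n)) for k in range(n)]

def circulant(f):
    """Matrix of 'multiplication by f' on F[Z/nZ] in the basis e_0, ..., e_{n-1}:
    a circulant matrix of order n."""
    n = len(f)
    return [[f[(i - j) % n] for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

f, g = [1, 2, 0, 3], [4, 0, 1, 0]
# The regular representation realizes f as a circulant matrix acting by convolution:
assert matvec(circulant(f), g) == convolve(f, g)
# F[G] is commutative since G = Z/4Z is abelian:
assert convolve(f, g) == convolve(g, f)
```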

    First problem set / Linear Algebra I: vector space basics; an introduction to convolution rings
    Clarifications (2.ix.16):
    • “Which if any of these basic results would fail if F were replaced by Z?” — but don’t worry about this for problems 7 and 24, which specify R.
    • Problem 12: If you see how to compute this efficiently but not what this has to do with Problem 8, please keep looking for the connection.
    Here’s the “Proof of Concept” mini-crossword with links concerning the ∎ symbol. Here’s an excessively annotated solution.

    Second problem set / Linear Algebra II: dimension of vector spaces; torsion groups/modules and divisible groups
    About Problem 5: You may wonder: if not determinants, what can you use? See Axler, Chapter 4, namely 4.8 through 4.12 (pages 121–123), and note that the proof of 4.8 (using techniques we won’t cover till next week) can be replaced by the ordinary algorithm for polynomial long division, which you probably learned with real coefficients but works over any field. While I’m at it, 4.7 (page 120) works over any infinite field; Axler’s proof is special to the real and complex numbers, but 4.12 yields the result in general. (We already remarked that this result does not hold for finite fields.)
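The long-division algorithm mentioned here works verbatim over any field; as an illustration, here is a Python sketch of polynomial division over Z/pZ (p prime), with polynomials as coefficient lists, lowest degree first (a convention chosen for this sketch, not Axler's):

```python
def poly_divmod(num, den, p):
    """Polynomial long division over the field Z/pZ (p prime).
    num, den are coefficient lists, lowest degree first; returns (quotient, remainder)."""
    num = [c % p for c in num]
    den = [c % p for c in den]
    while den and den[-1] == 0:        # strip leading zeros of the divisor
        den.pop()
    inv_lead = pow(den[-1], p - 2, p)  # inverse of the leading coefficient (Fermat)
    quot = [0] * max(len(num) - len(den) + 1, 1)
    rem = num[:]
    while len(rem) >= len(den) and any(rem):
        shift = len(rem) - len(den)
        coeff = (rem[-1] * inv_lead) % p
        quot[shift] = coeff
        for i, d in enumerate(den):    # subtract coeff * x^shift * den
            rem[shift + i] = (rem[shift + i] - coeff * d) % p
        while rem and rem[-1] == 0:
            rem.pop()
    return quot, rem

# Over F_5: x^2 - 1 = (x - 1)(x + 1) exactly, while x^2 = (x + 2)(x + 3) + 4.
assert poly_divmod([4, 0, 1], [4, 1], 5) == ([1, 1], [])
assert poly_divmod([0, 0, 1], [2, 1], 5) == ([3, 1], [4])
```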

    Third problem set / Linear Algebra III: Countable vs. uncountable dimension of vector spaces; linear transformations and duality

    Fourth problem set / Linear Algebra IV: Duality; projective spaces; more connections with polynomials

    Fifth problem set / Linear Algebra V: “Eigenstuff” (with a prelude on exact sequences and more duality)
    corrected 4.x.16:
    Problem 2: F^N / V*, not V^N / V* (noted by many students);
    Problem 5ii: all α in F, not all α in V (noted by Hikari Sorensen).
    Due date extended from Oct.7 (Fri.) to Oct.12 (Wed.) in class

    Sixth problem set / Linear Algebra VI: Tensors, eigenstuff cont’d, and a bit on inner products

    Seventh problem set / Linear Algebra VII: Pairings and inner products, cont’d

    Eighth problem set / Linear Algebra VIII: The spectral theorem; spectral graph theory; symplectic structures

    Ninth problem set / Linear Algebra IX: Trace, determinant, and more exterior algebra

    Tenth problem set: Linear Algebra X (determinants and distances); representations of finite abelian groups (Discrete Fourier transform)

    Eleventh and final problem set: Representations of finite abelian groups