\( \newcommand{\IM}{\text{Im}\,} \) \( \newcommand{\coker}{\text{coker}\,} \)

Manifolds III

Cohomology

Cleaning Up My Previous Post



I ended my previous blog post by wrapping up the basics of covectors and \(k\)-forms on a manifold. To most, these concepts feel abstract and daunting, since very little physical interpretation comes with them. I'm hoping, however, that after a bit of discussion on integration theory, we'll be well-prepared for the physical interpretations that arrive in Riemannian geometry.

That said, I mentioned the concepts of pullbacks and integration in my last post without really going into details. After hopelessly trying to write the mechanics of cohomology without these two tools, I realized that they are essential prerequisites and thus should be covered first.

The first thing I want to define (which I really should have put in the last blog post) is known as the exterior derivative. Recall that a differential \(1\)-form takes a vector field (which we can think of as some sort of flux) and assigns a real-valued function to it. Therefore, our \(1\)-forms can be thought of as tools which measure the flux across the boundary of a region in a smooth manifold. The purpose of our exterior derivative is to sum the net flux over a two-dimensional set with respect to some \(1\)-form. As one may expect, this results in a map which takes \(1\)-forms to \(2\)-forms.

Formally, given a \(k\)-form \(\omega = \sum_{I}\alpha_I\,dx^I\), we define the exterior derivative to be the (\(k+1\))-form

$$ d\omega = \sum_{I} d\alpha_I \wedge dx^I $$

(recall from last time that our \(\alpha_I\) are real-valued functions on \(M\), so \(d\alpha_I = \sum_i \frac{\partial \alpha_I}{\partial x^i}\, dx^i\)).

A visual interpretation of how the exterior derivative affects a vector field
Left: the form \(\omega = x\,dy\) over the vector field \(X = \partial_y\), where blue corresponds to negative and red corresponds to positive
Right: The form \(d\omega = dx \wedge dy\). Imagine the dots are simply vectors coming out of the screen.

For example, consider the vector field \(X = \partial_y\) together with the \(1\)-form \(\omega = x\,dy\). For each vector in \(X\), \(\omega\) assigns some sort of value which we will picture as thickness. Given some open region \(U\), we can think of our \(1\)-form as measuring the amount of flux through the boundary \(\partial U\). On the other hand, the exterior derivative \(d\omega = dx \wedge dy\) is measuring the net flux through the surface itself.
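Throughout this post, I'll occasionally drop in small Python (sympy) sketches to sanity-check computations like this one. These use my own throwaway encodings of forms (not any standard differential-geometry API), so treat them as illustrations rather than gospel. Here, a \(1\)-form \(f\,dx + g\,dy\) on \(\mathbb{R}^2\) is just its coefficient pair, and the definition above reduces to \(d\omega = \left(\frac{\partial g}{\partial x} - \frac{\partial f}{\partial y}\right) dx \wedge dy\):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Represent a 1-form f dx + g dy on R^2 by its coefficient pair (f, g).
f, g = sp.Integer(0), x  # omega = x dy

# By the definition of d:  d(f dx + g dy) = (dg/dx - df/dy) dx ^ dy
coeff = sp.diff(g, x) - sp.diff(f, y)
print(coeff)  # 1, i.e. d(x dy) = dx ^ dy, matching the picture above
```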

For those who have taken calculus (which I'm guessing is everyone reading this article, but anyone who hasn't and still made it this far: bien joué), a common term which you've probably heard is "anti-derivative". After learning a bit about the fundamental theorem of calculus and how integration on \(\mathbb{R}^n\) looks like differentiating backwards, this nomenclature begins to make sense. In differential geometry, however, the term anti-derivative means something entirely different from the notion introduced in calculus. Often in higher-level mathematics, the prefix "anti" indicates that reversing the order of an operation negates the result. For example, we say that a binary operation \(\oplus\) over a set \(S\) is commutative if $$ x \oplus y = y \oplus x$$ for all \(x, y \in S\), and anti-commutative if $$ x \oplus y = - ( y \oplus x) $$

Formally, given a graded algebra \(R = \bigoplus_{n=0}^\infty R_n\) over a field \(K\), mathematicians call a \(K\)-linear map \(D: R \to R\) a derivation if it satisfies:

$$ D(\alpha \beta) = D(\alpha)\beta + \alpha D(\beta) $$

and call it an anti-derivation if it satisfies:

$$ D(\alpha \beta) = D(\alpha)\beta + (-1)^k \alpha D(\beta)$$

for any \(\alpha \in R_k\) and \( \beta \in R_l\). We say that a derivation (resp. anti-derivation) \(D\) is of degree \(n\) if, for any \(\gamma \in R_m\), we have \(D(\gamma) \in R_{m + n}\) (that is, \(D\) shifts the grading by \(n\)).

Hopefully the notation \(\bigoplus\) above didn't completely throw a wrench in things for you: the \(\oplus\) symbol denotes the direct sum, which simply glues the graded pieces \(R_n\) together into one big space. The multiplication making \(R\) an algebra is whatever binary operation we abstracted over our field \(K\); in our case, it is the wedge product (\(\wedge\)), so you will often see me use \(\Omega^*(M) = \bigoplus_{k=0}^{\dim M} \Omega^k(M)\) instead.

There will only be one anti-derivation we care about in this blog series and, as you may have guessed, it is the exterior derivative.

Before discussing my next topic, I want to go over a few important facts about the exterior derivative that will help us when it comes time for cohomology.

Lemma:
The exterior derivative \(d: \Omega^*(M) \to \Omega^*(M)\) is an anti-derivation of degree \(1\).

Theorem:
Given an (at least) \(C^2\) manifold \(M\), the exterior derivative satisfies \(d \circ d = 0\).
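Both facts are fun to experiment with. Below is a minimal sketch (again, my own ad hoc encoding: a form on \(\mathbb{R}^3\) is a dictionary from strictly increasing index tuples to sympy coefficients) that checks the anti-derivation identity on two sample \(1\)-forms, as well as \(d \circ d = 0\) on a \(0\)-form and a \(1\)-form:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def sort_sign(idx):
    """Sign of the permutation sorting idx; 0 if an index repeats."""
    idx = list(idx)
    if len(set(idx)) < len(idx):
        return 0
    sign = 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

def wedge(a, b):
    """Wedge product of forms encoded as {increasing index tuple: coefficient}."""
    out = {}
    for I, f in a.items():
        for J, g in b.items():
            s = sort_sign(I + J)
            if s:
                K = tuple(sorted(I + J))
                out[K] = out.get(K, 0) + s * f * g
    return out

def d(a):
    """Exterior derivative: d(f dx^I) = sum_i (df/dx^i) dx^i ^ dx^I."""
    out = {}
    for I, f in a.items():
        for i in range(len(coords)):
            s = sort_sign((i,) + I)
            if s:
                K = tuple(sorted((i,) + I))
                out[K] = out.get(K, 0) + s * sp.diff(f, coords[i])
    return out

def is_zero(a):
    return all(sp.simplify(c) == 0 for c in a.values())

def minus(a, b):
    keys = set(a) | set(b)
    return {K: a.get(K, 0) - b.get(K, 0) for K in keys}

# Two sample 1-forms:
alpha = {(0,): x*y, (1,): z**2, (2,): sp.sin(x)}
beta  = {(0,): y*z, (1,): x,    (2,): sp.exp(y)}

# Anti-derivation of degree 1 on 1-forms: d(a ^ b) = da ^ b - a ^ db
print(is_zero(minus(d(wedge(alpha, beta)),
                    minus(wedge(d(alpha), beta), wedge(alpha, d(beta))))))  # True

# d o d = 0 on a 0-form (mixed partials cancel) and on a 1-form:
print(is_zero(d(d({(): sp.sin(x*y) + z**3}))))  # True
print(is_zero(d(d(alpha))))                     # True
```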

Now that we have a good idea of how differential forms behave on a manifold, how do we translate a \(k\)-form on one manifold to another? In order to do something like that, we need what is called a pullback. Recall that given a map \(F: M \to N\) between manifolds and a vector field \(X\) on \(M\), we defined the pushforward \(F_*: TM \to TN\) pointwise by defining the map \(F_{*,p}: T_pM \to T_{F(p)}N \) to satisfy

$$ (F_{*, p}(X_p))f := X_p(f \circ F) $$

where \(f\) is any smooth real-valued function on \(N\). We define the codifferential, or pullback, to be the dual of the pushforward. Given a map \(F: M \to N\) between manifolds and a \(k\)-form \(\omega \in \Omega^k(N)\), we define the pullback \(F^*: \Omega^k(N) \to \Omega^k(M)\) pointwise by requiring the map \(F^*_p: A_k(T_{F(p)}N) \to A_k(T_pM)\) to satisfy

$$ (F^*\omega)_p (v_1, \dots, v_k) := \omega_{F(p)} (F_{*,p}v_1, \dots, F_{*, p}v_k) $$

We're originally given a \(k\)-form on \(N\) which takes \(k\) tangent vectors in \(TN\) to \(\mathbb{R}\), so by using the properties of our pushforward we can allow input vectors to be on \(TM\) now instead of \(TN\)! Since this function takes \(k\) input vectors on \(TM\), is both \(k\)-linear and alternating (inherited from \(\omega\)), and outputs some scalar in \(\mathbb{R}\), it must be a \(k\)-form by definition. The reader can roughly think of the pullback as a composition of \(\omega\) with \(F_*\); this doesn't literally make sense as written, since the domains and codomains don't align (\(F_*\) would need to map to \(k\) copies of \(T_{F(p)}N\)).

Notice that the pullback actually swaps the original mapping order of our function. That is, given a function \(F:M \to N\) between manifolds, we have that \(F^*: T^*N \to T^*M\) is a map from the cotangent bundle of \(N\) to the cotangent bundle of \(M\). For any category theorists, this makes the pullback operator \((^*)\) a contravariant functor from the category of manifolds to the category of alternating sections.

For example, suppose we have the function \(F: \mathbb{R}^2 \to \mathbb{R}^3\) defined by \(F(u, v) = (u, uv, u^2v^3)\) and the \(1\)-form \(\omega = z\,dx + xy^2\,dy-x\,dz\). Then we can use \(F\) sort of like a change of variables from calculus:

$$ \begin{align} F^*\omega &= u^2v^3\,du + u(uv)^2\,d(uv) - u\,d(u^2v^3) \\&= u^2v^3\,du + u^3v^2(v\,du + u\,dv) - u(2uv^3\,du +3v^2u^2\,dv) \\&= (-u^2v^3 + u^3v^3)\,du + (u^4v^2 -3v^2u^3)\,dv \end{align} $$
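Pullback computations like this are mechanical enough to script. A small sketch (same caveat as before: my own ad hoc bookkeeping) that substitutes \(x, y, z\) for their expressions in \(u, v\) and expands each differential via the chain rule:

```python
import sympy as sp

u, v = sp.symbols('u v')

# F(u, v) = (u, u*v, u**2 * v**3)
X, Y, Z = u, u*v, u**2 * v**3

# d of a function of (u, v), returned as (du-coefficient, dv-coefficient)
def dF(h):
    return (sp.diff(h, u), sp.diff(h, v))

dX, dY, dZ = dF(X), dF(Y), dF(Z)

# omega = z dx + x*y**2 dy - x dz, pulled back coefficient by coefficient:
du_coeff = Z*dX[0] + X*Y**2*dY[0] - X*dZ[0]
dv_coeff = Z*dX[1] + X*Y**2*dY[1] - X*dZ[1]

print(sp.expand(du_coeff))  # u**3*v**3 - u**2*v**3
print(sp.expand(dv_coeff))  # u**4*v**2 - 3*u**3*v**2
```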

Before moving onto orientations, boundaries, and integrals, I want to provide a helpful theorem regarding the pullback:

Theorem:
Let \(M\) and \(N\) be smooth manifolds, and let \(F: M \to N\) be smooth. Then \(F^*(d \omega) = dF^*(\omega)\) for any \(k\)-form \(\omega\).
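We can at least spot-check the \(0\)-form case of this theorem symbolically: pulling a hypothetical test function \(f\) back and then differentiating should match pulling back \(df\). A sketch, reusing the map \(F\) from the example above:

```python
import sympy as sp

u, v, x, y, z = sp.symbols('u v x y z')

# Same F as above: F(u, v) = (u, u*v, u**2*v**3), plus a test function f on N.
F = {x: u, y: u*v, z: u**2 * v**3}
f = x * sp.sin(y) + z**2

# d(F* f): pull f back first (substitute), then differentiate in (u, v).
pulled = f.subs(F)
dFf = (sp.diff(pulled, u), sp.diff(pulled, v))

# F*(df): differentiate first (df = f_x dx + f_y dy + f_z dz), then pull back,
# replacing dx, dy, dz by the du/dv expansions of x, y, z.
fx, fy, fz = (sp.diff(f, w).subs(F) for w in (x, y, z))
du_coeff = fx*sp.diff(F[x], u) + fy*sp.diff(F[y], u) + fz*sp.diff(F[z], u)
dv_coeff = fx*sp.diff(F[x], v) + fy*sp.diff(F[y], v) + fz*sp.diff(F[z], v)

print(sp.simplify(dFf[0] - du_coeff), sp.simplify(dFf[1] - dv_coeff))  # 0 0
```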





Orientations, Boundaries, and Integrals



Now that we are caught up on derivations and pullbacks, the first thing I want to introduce is a fairly straightforward concept that is often taught in high school physics courses: orientation. Consider a traditional 12-hour analog clock — there are exactly two ways for the hour and minute hand to traverse the clock (i.e. clockwise and counter-clockwise). This should seem fairly obvious since \(S^1\) is a \(1\)-dimensional manifold and we know that there are only two directions in \(1\) dimension. So how many possible orientations are there in higher dimensions? As we will come to see, the answer is two as well!

Before expounding on the technicalities, the rationale for a shape of arbitrary dimension having only two orientations is as follows: an orientation on an \((n+1)\)-dimensional object induces an orientation on any \(n\)-dimensional subcomponent. The easiest way for me to portray this is through simplicial complexes:

An orientation on the 1, 2, and 3 simplex
Orientation on a 1-simplex, 2-simplex, and 3-simplex

Say we have a 1-simplex (which is just a line); when we choose some orientation on our line, any 2-simplex containing it must have two other lines. The orientations on the two other lines can either agree (shown in blue above) with our original line, or disagree (shown in red above) — if the orientations of all the lines agree, we have an orientation on our 2-simplex (i.e. triangle).

Things get a bit trickier when we define an orientation on our 3-simplex, however. Unlike before, every line is now contained in exactly two faces instead of one, so each line inherits a direction from each of its two faces. For example, on the front face in our picture above, the bottom line is moving to the left. However, with respect to the bottom face, that same line needs to move to the right.

The construction for our orientation on the 3-simplex may seem a bit counter-intuitive; but, if you notice, it persists under rotation about any axis. Say we accidentally knock over our 3-simplex so that our right-most face is now the new bottom face. Despite the rotation, no orientations have changed!

"But hold on," you might say. "I remember the left-hand rule from physics, and there were no conflicting rotations." Yes — there were. The only orientation you really cared about (for the sake of, say, torque) was on the \(xy\)-plane, because your position and force vectors existed on that plane. However, that orientation on the \(xy\)-plane additionally induced an orientation on the \(xz\) and \(yz\)-planes. Depending on which way you took the cross-product, these two orientations told you the direction of your torque vector \(\tau\). For the record, this is why the order of your cross-product mattered, since it led to conflicting directions along the \(z\)-axis.

A depiction of how the right-hand rule really represents orientations
The orientations induced by the right-hand rule

Hopefully we understand by this point that any \(n\)-dimensional space can have only 2 possible orientations. But what is an orientation formally? It makes my job a little bit harder to tell you that an orientation is not what you'd expect. To motivate our definition further, recall that we were able to rotate our 2-simplex above without changing the orientation. On the standard basis for \( \mathbb{R}^3 \), rotation by \( 90^\circ\) about a coordinate axis corresponds to a (signed) permutation of the standard basis \( \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\).

Given an \(n\)-dimensional vector space \(V\) and a basis \((e_1, \dots, e_n)\), we can re-order the elements of our basis in any way possible. If bases were unique up to re-ordering, we could simply define two equivalence classes based on the signs of permutations. However, a vector space has infinitely many bases. Any pair of bases \((a_1, \dots, a_n)\) and \((b_1, \dots, b_n)\) is connected by a transition matrix \([m_{ij}]\), which is invertible, so the only way to cleanly split the collection of bases down the middle is by the sign of the determinant \( \det([m_{ij}]) \).

Formally, we put an equivalence relation on the set of all bases for a vector space \(V\): two bases (\(a_1, \dots, a_n\)) and (\(b_1, \dots, b_n \)) are equivalent if their transition matrix \([m_{ij}]\) has positive determinant. This relation has exactly two equivalence classes, and an orientation is a choice of one of them.
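Computationally, checking whether two bases are equivalent is a one-liner: solve for the transition matrix and inspect the sign of its determinant. A sketch (the bases below are toy examples of mine):

```python
import sympy as sp

def same_orientation(A, B):
    """A, B: matrices whose columns are basis vectors.
    The transition matrix M solves B = A*M, so M = A^{-1} * B."""
    return (A.inv() * B).det() > 0

e = sp.eye(3)                                           # e1, e2, e3
swapped = sp.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])  # e2, e1, e3
cycled  = sp.Matrix([[0, 0, 1], [1, 0, 0], [0, 1, 0]])  # e2, e3, e1

print(same_orientation(e, swapped))  # False: one transposition flips orientation
print(same_orientation(e, cycled))   # True: a cyclic shift is a rotation
```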

So where do the manifolds come in? Well, for any point \(p \in M \), our tangent space \(T_pM\) has the nice property that it's also a vector space! Thus, at each point \(p \in M\), we can choose an orientation \(\mu_p\) of \(T_pM\), which globally gives us some function \(p \mapsto \mu_p\). A manifold \(M\) is said to be orientable if the map \(p \mapsto \mu_p\) can be chosen continuously.

Given a connected \(n\)-dimensional manifold \(M\) and a continuous orientation \(\mu\), any locally constant orientation must be globally constant. Consequently, the transition matrices between overlapping charts can never have vanishing determinant: they must stay nonsingular! Using this, we can define a nowhere-vanishing \(n\)-form by applying a \(C^\infty\) bump function (a technical tool that I never explained) to local coordinates \(x^1, \dots, x^n\). I'm avoiding the proof because it is a bit more technical despite the fact that the general idea is quite simple. What matters is that a manifold being orientable is equivalent to it having a top-dimensional differential form which is nowhere-vanishing. Just go with it (or don't).







Our next topic on the road to de Rham cohomology is the study of manifolds with boundary. Note that if every point in our manifold has a neighborhood homeomorphic to an open ball in Euclidean space, then there's no possible way for us to stand on the edge of our manifold! We simply keep getting closer and closer to the edge without ever reaching it. Thus, our open sets must be homeomorphic to some structure other than the open sets in \(\mathbb{R}^n\) alone. To solve this issue, we define the half-space \(\mathcal{H}^n\) to be the set \( \{ (x^1, \dots, x^n) \in \mathbb{R}^n \mid x^n \geq 0 \} \). In one dimension, \(\mathcal{H}^1 = [0, \infty)\), while in two dimensions, \(\mathcal{H}^2\) is the upper half-plane:

A visual depiction of the half-plane
\( \mathcal{H}^2\) and \( \mathcal{H}^3 \)

You may notice that the set \( [0, \infty) \) is not actually open in Euclidean space — so how can we define open sets containing all of our elements (in particular, neighborhoods of \(0\))? Well, the topological axioms from my first post state that the whole space \( [0, \infty) \) must be open. In addition, the axioms state that the intersection of open sets must also be open, so something like \( [0, \infty) \cap (-1, 1) = [0, 1) \) should be open as well. Motivated by this, given a topological space \((X, \tau)\) and a subset \(Y \subset X\), we define the subspace topology on \(Y\) to be \( \{ U \cap Y \mid U \in \tau \} \). Since \( \mathcal{H}^n \subset \mathbb{R}^n \), every \(\mathcal{H}^n\) inherits a subspace topology.

Much like a regular manifold, we define a manifold with boundary to be a Hausdorff, second-countable topological space. However, instead of possessing an atlas of charts homeomorphic to open subsets of \(\mathbb{R}^n\), a manifold with boundary has an atlas of charts homeomorphic to open subsets of \(\mathcal{H}^n\). Therefore, points along the edge of our manifold get mapped into the hyperplane \(x^n = 0\).

Topologically, we define the boundary of a set \(U \subset X\) to be the set of all points \(p \in X\) such that every neighborhood of \(p\) contains a point of \(X - U\) and a point of \(U\) simultaneously. That is, for any point \(p \in \partial U\), no matter how small of an open ball around \(p\) we take, it must contain both a point inside our set and a point outside our set.

A visual interpretation of what it means for a point to be a border point
A point along the boundary of \( \mathcal{H}^2 \)

The boundary of a set \(U\) is typically denoted \(\partial U\) (I will explain later why we use the notation for tangent vectors).

It should be clear that the boundary of \( \mathcal{H}^n \) is the set \( \partial \mathcal{H}^n = \{ (x^1, \dots, x^n)\mid x^n = 0\} \cong \mathbb{R}^{n-1} \). For example, notice that the boundary of \(\mathcal{H}^2\) (pictured above) is the \(x\)-axis, which is homeomorphic to \(\mathbb{R}\). Similarly, the boundary of \(\mathcal{H}^3\) (pictured above) is simply the \(xy\)-plane, which is homeomorphic to \(\mathbb{R}^2\).

Given an \(n\)-dimensional manifold \(M\), point \(p \in M\), and chart \( (U, \phi) \) containing \(p\), we say that a point \(p\) is a boundary point (denoted \(p \in \partial M\)) if \( \phi(p) \in \partial \mathcal{H}^n \).

Notice, however, that this would imply the boundary points map into \(\partial \mathcal{H}^{n}\), which is homeomorphic to \(\mathbb{R}^{n-1}\). If we define coordinate charts by \( (U \cap \partial M, \phi|_{U \cap \partial M}) \), this would imply that \(\partial M\) is an \((n-1)\)-dimensional manifold without boundary (since it maps onto \(\mathbb{R}^{n-1}\) instead of \(\mathcal{H}^{n-1}\)). This will be an incredibly important concept later, but for the moment keep the fact that

$$ \partial \circ \partial = 0 $$

in the back of your head.
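If you'd like something concrete to anchor that slogan, the same phenomenon shows up in a discrete toy model: send a solid triangle to the signed sum of its edges, and send each edge to its head minus its tail. A sketch with matrices (this little complex will reappear when we reach cochain complexes):

```python
import sympy as sp

# Oriented triangle with vertices 0, 1, 2 and edges a: 0->1, b: 1->2, c: 2->0.
# boundary_2 sends the face to a + b + c; boundary_1 sends each edge to
# (head) - (tail), recorded in the columns below.
boundary_2 = sp.Matrix([1, 1, 1])               # face -> edges
boundary_1 = sp.Matrix([[-1,  0,  1],
                        [ 1, -1,  0],
                        [ 0,  1, -1]])          # edges -> vertices

print(boundary_1 * boundary_2)  # zero vector: the boundary of a boundary vanishes
```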







Our last topic before moving on to cohomology is the theory of integration. For the details of Riemann integration, I'll refer the reader to Rudin's Principles of Mathematical Analysis, chapters 6 and 10. The primary definition we care about is definition 10.11, which tells us that, given a \(k\)-form \( \omega = f(x)\, dx^{i_1} \wedge \dots \wedge dx^{i_k} \) over some open set \(U \subset \mathbb{R}^n\),

$$ \int_U \omega := \int_U f(x) dx^{i_1}\dots dx^{i_k} $$

which is basically what you'd expect. We require that the indices be strictly increasing due to the fact that \(dx \wedge dy = - dy \wedge dx\). Going back to that whole "orientation-is-equivalent-to-top-dimensional-form" business, this is the logic that explains why choosing the wrong orientation on your domain of integration gives the wrong sign in vector calculus.

To define an integral on our manifold, we simply steal any existing structure from \(\mathbb{R}^n\) using our coordinate chart \( (U, \phi) \), just like we did for differentiation. However, we're not actually translating points on \(M\) to points on \(\mathbb{R}^n\), but instead differential forms. Therefore, we're going to need that pullback that I defined earlier. But wait a second — if \(\phi: U \to \mathbb{R}^n\) maps onto Euclidean space and our pullback is a contravariant functor, we're going the wrong direction! Hence, it's not actually \(\phi\) we care about, but \(\phi^{-1}\)! This way, \( (\phi^{-1})^*\) is a linear map which takes \(k\)-forms on our manifold to \(k\)-forms on Euclidean space (which we know how to integrate by the formula above). Formally, given a coordinate chart \( (U, \phi) \) and \(k \)-form \( \omega \), we define the integral of \(\omega\) over \(U\) to be

$$ \int_U \omega = \int_{\phi(U)} (\phi^{-1})^*(\omega) $$

For example, consider the sphere \(S^2\). The sphere is a 2-dimensional manifold whose points can be described by the coordinates \((\theta, \phi)\) (forgive the collision between the angle \(\phi\) and the chart \(\phi\)). On any coordinate neighborhood, the inverse of our chart is the parametrization \((\theta, \phi) \mapsto (x(\theta, \phi), y(\theta, \phi), z(\theta, \phi)) \) where

$$ \begin{align} x(\theta, \phi) &= \sin \phi \cos \theta \\ y(\theta, \phi) &= \sin \phi \sin \theta \\ z(\theta, \phi) &= \cos \phi \end{align} $$

Suppose we want to find the integral of the \(1\)-form \(\omega = (x\,dy - y\,dx)\big|_{S^2}\) over some curve \(U \subset S^2\). Given the coordinates \( (x, y, z) \) for \(\mathbb{R}^3\), pulling back along the parametrization \(\phi^{-1}\) gives

$$ \begin{align} (\phi^{-1})^*(dx) &= \cos \phi \cos \theta \,d\phi - \sin \phi \sin \theta \,d\theta \\ (\phi^{-1})^*(dy) &= \cos \phi \sin \theta \,d\phi + \sin \phi \cos \theta \,d\theta \\ (\phi^{-1})^*(dz) &= -\sin \phi \, d\phi \end{align} $$

I will omit the background calculations, as they only require elementary trigonometry; ultimately, we have

$$ (\phi^{-1})^*(\omega) = \sin^2 \phi \, d\theta $$

Therefore,

$$ \int_U x\,dy - y\,dx = \int_{\phi(U)} \sin^2 \phi \, d\theta$$
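The trigonometric bookkeeping above is exactly the kind of thing worth scripting. A sketch that verifies \((\phi^{-1})^*(x\,dy - y\,dx) = \sin^2\phi\,d\theta\) by substituting the parametrization and applying the chain rule:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

# The parametrization (theta, phi) -> (x, y) on the sphere:
X = sp.sin(phi) * sp.cos(theta)
Y = sp.sin(phi) * sp.sin(theta)

# Pull back omega = x dy - y dx: substitute and expand dx, dy by the chain rule.
dtheta_coeff = X * sp.diff(Y, theta) - Y * sp.diff(X, theta)
dphi_coeff   = X * sp.diff(Y, phi)   - Y * sp.diff(X, phi)

print(sp.simplify(dtheta_coeff))  # sin(phi)**2
print(sp.simplify(dphi_coeff))    # 0
```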

At this point, you may be thinking, "This is just the same mathematics that I learned in vector calculus — why do graduate students spend so much of their time with this?" I have two answers for this:

  1. The sphere \(S^2\) is a pretty simple manifold to work with. For starters, its differential properties are inherited from \(\mathbb{R}^3\) more so than for typical manifolds, due to the fact that it's embedded there (note, however, that the Whitney embedding theorem states that every smooth \(n\)-manifold can be embedded in \(\mathbb{R}^{2n}\)). However, as one begins to research the more abstract manifolds studied in academia, performing calculations is no longer as trivial.
  2. The vast majority of vector calculus courses are not proof-based. Thus, instructors will often teach how to calculate the answer and nothing more.

The last thing I want to introduce in this section is Stokes' theorem:


Stokes' Theorem:
Let \(M\) be an oriented smooth \(n\)-dimensional manifold with boundary. For any smooth \((n-1)\)-form \(\omega\) with compact support, $$ \int_M d\omega = \int_{\partial M} \omega$$

I'm aware that I never discussed partitions of unity (which is a bit of a crime considering how often they're used in the study of manifolds), but the idea is that they let us reduce to local computations inside coordinate charts and patch the results together.






De Rham Cohomology



With Stokes' theorem in our inventory, we are now curious when a given differential form is the exterior derivative of another differential form. In my opinion, it is easier to address this first in terms of curves on a manifold.

A surface of genus 3
Curves on the \(3\)-holed torus

Suppose we have some smooth manifold \(M\) and curve \(\gamma: [0, 1] \to M\). If we apply Stokes' theorem to solely the curve \(\gamma\), then

$$ \int_\gamma d\omega = \int_{\partial \gamma} \omega = f(\gamma(1)) - f(\gamma(0)) $$

for any \(0\)-form \(\omega = f\). Thus, the line integral of an exact \(1\)-form depends only on the endpoints of the path — this came to be known as the fundamental theorem of line integrals (also called the gradient theorem). In particular, any closed curve (one with \(\gamma(0) = \gamma(1)\)) satisfies \(\int_\gamma d\omega = 0\). When a hole is present, however, we can find closed forms that are not exact, and topics from complex analysis like the winding number measure how their integrals around the hole fail to vanish.

For example, consider the unit disk with a hole punctured in it:

A depiction of how a closed curve circumnavigating a hole has non-zero integral value
Two curves on the punctured disk

Consider the differential form \( \omega_1 = \frac{-y}{x^2 + y^2}\, dx + \frac{x}{x^2 + y^2}\,dy \) — this form effectively allows us to measure the change in the swept angle \(\theta\). For any closed curve not enclosing the origin, we have that \(\int_\gamma \omega_1 = 0\). However, a closed curve wrapped around the origin will have

$$ \int_\gamma \omega_1 = 2n\pi $$

where \(n\) is called the winding number. Let me now introduce a few terms.

Formally, we say that a \(k\)-form \(\omega\) is closed if \(d\omega = 0\) and is exact if there exists some \((k-1)\)-form \(\tau\) such that \(\omega = d\tau\). In the example above, our differential form \(\omega_1\) is a closed form but not an exact form (the reader can check that there is no well-defined smooth function \(\theta(x, y)\) on the punctured disk such that \(d\theta = \omega_1\)).
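Both halves of that claim are easy to check symbolically: the coordinate formula for \(d\) shows that \(\omega_1\) is closed, while integrating \(\omega_1\) once around the unit circle \(\gamma(t) = (\cos t, \sin t)\) gives \(2\pi\) rather than \(0\), which rules out exactness by the fundamental theorem of line integrals. A sketch:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

f = -y / (x**2 + y**2)  # dx-coefficient of omega_1
g =  x / (x**2 + y**2)  # dy-coefficient of omega_1

# Closed: d(omega_1) = (dg/dx - df/dy) dx ^ dy = 0
print(sp.simplify(sp.diff(g, x) - sp.diff(f, y)))  # 0

# Not exact: integrate once around gamma(t) = (cos t, sin t)
on_gamma = {x: sp.cos(t), y: sp.sin(t)}
integrand = (f.subs(on_gamma) * sp.diff(sp.cos(t), t)
             + g.subs(on_gamma) * sp.diff(sp.sin(t), t))
print(sp.integrate(sp.simplify(integrand), (t, 0, 2*sp.pi)))  # 2*pi
```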

Let \(Z^k(M)\) denote the set of all closed \(k\)-forms on \(M\) and \(B^k(M)\) denote the set of all exact \(k\)-forms on \(M\). From the theorem in the first section, we have that \(d \circ d = 0\), so every exact form must also be closed. Hence, \(B^k(M) \subset Z^k(M)\) for all \(0 \leq k \leq n\). You will sometimes see \(B^k(M)\) called the set of "coboundaries" and \(Z^k(M)\) the set of "cocycles"; the terminology will make more sense when we reach cochain complexes below.

For a taste of why this matters in classical mechanics (think conservative force fields), suppose we have some \(1\)-form \(\alpha = f(x, y)\,dx + g(x, y)\,dy\). Then

$$ \begin{align} d\alpha &= \frac{\partial f}{\partial x} dx \wedge dx + \frac{\partial f}{\partial y} dy \wedge dx + \frac{\partial g}{\partial x} dx \wedge dy + \frac{\partial g}{\partial y} dy \wedge dy \\&= \left( \frac{\partial g}{\partial x} - \frac{\partial f}{\partial y} \right) dx \wedge dy \end{align} $$

Therefore, if \(\alpha\) is closed, then \( \frac{\partial g}{\partial x} = \frac{\partial f}{\partial y} \). Similarly, if \(\alpha\) is exact, then there exists some smooth function \(h: \mathbb{R}^2 \to \mathbb{R}\) such that \(\alpha = dh = \frac{\partial h}{\partial x}dx + \frac{\partial h}{\partial y}dy \). Moreover, since exact forms are closed, this would imply that

$$ \frac{\partial^2 h}{\partial x \partial y} = \frac{\partial^2 h}{\partial y \partial x} $$

Therefore, Schwarz's theorem on the symmetry of second derivatives is precisely the statement that every exact \(1\)-form is closed!

We define the de Rham cohomology group of degree \(k\) to be the quotient

$$ H_{dR}^k(M) := Z^k(M) / B^k(M) = \ker d^k / \IM d^{k-1} $$

where \(d^r\) is simply the restriction of the map \(d\) to \(r\)-forms. The elements of \(H_{dR}^k(M)\) are the equivalence classes \( [\omega] \) defined by the relation

$$ \omega \sim \omega' \Leftrightarrow \exists \tau \in \Omega^{k-1}(M), \, \omega' = \omega + d\tau $$

It is common to write \(H_{dR}^k(M) = 0\) (or say that the \(k^{\textrm{th}}\) cohomology group is trivial) whenever \( Z^k(M) = B^k(M) \). Note that for any manifold \(M\), there are no exact \(0\)-forms (since there is no such thing as a \((-1)\)-form) — therefore \(H_{dR}^0(M) = Z^0(M)\). Similarly, for any \(n\)-dimensional manifold, \(H_{dR}^k(M) = 0\) for all \(k \gt n\). This is due to the fact that, given some \(n\)-form \(\omega\), \(\omega\) must have the representation

$$ \omega = f(x^1, \dots, x^n)\, dx^1 \wedge \dots \wedge dx^n $$

Thus,

$$ d \omega = \sum_{i=1}^n \frac{\partial f}{\partial x^i} dx^i \wedge dx^1 \wedge \dots \wedge dx^n $$

However, the covector \(dx^i\) already appears in the wedge product \(dx^1 \wedge \dots \wedge dx^n\), so each term cancels and \(d\omega\) evaluates to \(0\). In fact, every \((n+1)\)-form is already the zero form (an alternating map in more than \(n\) vectors from an \(n\)-dimensional space must vanish), so \(Z^{n+1}(M) = \{ 0 \} = B^{n+1}(M)\). The same logic applies to higher degrees.

A common example for beginners is to find the cohomology group \(H_{dR}^1(S^1)\). Since there are no higher-dimensional forms, every \(1\)-form on \(S^1\) is closed: $$Z^1(S^1) = \Omega^1(S^1) = \{ f(t)\,dt \mid f \in C^\infty(S^1) \}$$

I'll go ahead and introduce a helpful lemma to make this easier:

Lemma:
Let \(M\) be a compact, connected, orientable \(n\)-manifold without boundary and \(\omega\) an \(n\)-form. If \(\omega\) is exact, then \(\int_M \omega = 0\).

Since our circle is a compact, connected, orientable \(1\)-manifold without boundary, the lemma above tells us that every exact \(1\)-form integrates to zero. The converse holds as well: if \(\int_{S^1} \alpha = 0\), then \(g(\theta) := \int_0^\theta \alpha\) is a well-defined function on \(S^1\) with \(dg = \alpha\). Hence

$$ B^1(S^1) = \left\{ \alpha \mid \int_{S^1} \alpha = 0\right\} $$

Therefore,

$$ H_{dR}^1(S^1) = Z^1(S^1) / B^1(S^1) \cong \mathbb{R} $$

since integration \([\sigma] \mapsto \frac{1}{2\pi}\int_{S^1} \sigma\) descends to a well-defined, injective map on classes that can take any value in \(\mathbb{R}\). Utilizing the two other facts that I provided prior to this example, we holistically have

$$ H_{dR}^k(S^1) = \begin{cases} \mathbb{R} & k = 0 \\ \mathbb{R} & k = 1 \\ 0 & k \geq 2 \end{cases} $$
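In coordinates, this isomorphism is pleasant to play with: the class of \(\sigma = f(\theta)\,d\theta\) is its average value, and subtracting that average off leaves a form whose primitive really is \(2\pi\)-periodic, hence exact. A sketch with a hypothetical test form:

```python
import sympy as sp

theta, s = sp.symbols('theta s')

f = sp.sin(theta)**2 + sp.cos(theta)  # sigma = f(theta) dtheta on S^1

# The class [sigma] in H^1(S^1) ~ R is the normalized integral:
c = sp.integrate(f, (theta, 0, 2*sp.pi)) / (2*sp.pi)
print(c)  # 1/2

# sigma - c dtheta should be exact: its primitive g must be 2*pi-periodic.
g = sp.integrate((f - c).subs(theta, s), (s, 0, theta))
print(sp.simplify(g.subs(theta, 2*sp.pi) - g.subs(theta, 0)))  # 0
```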

There will be a much easier way to compute this when I make it to the Mayer-Vietoris sequence. For the moment, however, it's worth building up some structure on our cohomology groups. Consider the following lemma:

Lemma:
Let \(F: M \to N\) be a smooth map between manifolds. Then the pullback \(F^*: \Omega^*(N) \to \Omega^*(M)\) preserves closed and exact forms.

Therefore, using a smooth map between manifolds, we can translate cohomology classes from one manifold to another! Given a smooth map between manifolds \(F: M \to N\), we define the cohomology pullback map \( F^\# : H_{dR}^k(N) \to H_{dR}^k(M) \) by \( [\omega] \mapsto [ F^*\omega ] \); the lemma above tells us that this map is well-defined on cohomology classes. But how well-structured is \(H_{dR}^k(M)\) algebraically? Well, according to this next theorem, if you think about the wedge product as multiplication then it gives us a graded ring:

Theorem:
Given an \(n\)-dimensional manifold \(M\), \(H_{dR}^*(M) = \bigoplus_{k=0}^n H_{dR}^k(M)\) forms a ring.

You will sometimes see the wedge product between cohomology classes called the cup product, denoted by \([\omega] \smile [\tau] = [\omega \wedge \tau]\). Use whichever notation you prefer, but preserving the wedge symbol seems a little more intuitive to me.








The last set of material (which may be a bit lengthy) that I want to introduce starts off with something called cochain complexes (I know I said you wouldn't have to worry about category theory, but look how far you've come). Let \( \mathcal{C} = \{C^k, d_k \}_{k \in \mathbb{Z}} \) be a collection of objects \(C^k\) and morphisms \(d_k: C^k \to C^{k+1}\) in an abelian category \(\mathcal{A}\). If \(d_k \circ d_{k-1} = 0\) for all \(k\), then we call

$$ \dots C^{-1} \xrightarrow{d_{-1}} C^0 \xrightarrow{d_0} C^1 \xrightarrow{d_1} C^2 \xrightarrow{d_2} \dots $$

a cochain complex. The elements \(c \in C^k\) are often referred to as \(k\)-cochains.

Recall that the kernel of a function \(f : X \to Y\) is the set \(\ker f = \{ x \in X | f(x) = 0 \}\) and its image is \(\IM f = \{ f(x) | x \in X \}\). Given \(i \in \mathbb{Z}\) and \(j \gt i + 1\), we say that a sequence of morphisms

$$ C^i \xrightarrow{d_i} C^{i+1} \xrightarrow{d_{i+1}} \dots \xrightarrow{d_{j-2}} C^{j-1} \xrightarrow{d_{j-1}} C^j $$

is exact if \( \IM d_{k} = \ker d_{k+1} \) for all \(i \leq k \leq j-2\).

To get you all nice and warmed up to the new kind of mathematics we'll be doing the remainder of this blog post (commonly known as "diagram chasing"), I'll start off with an easy lemma:

Lemma:
A linear morphism \(d: A \to B\) between objects in an abelian category is an isomorphism if and only if we have the following exact sequence: $$ 0 \rightarrow A \xrightarrow{d} B \rightarrow 0 $$

Let me now introduce a new definition which many readers may not have seen before (unless you study algebra): the cokernel. Given a morphism \(f: A \to B\), we define the cokernel of \(f\), denoted \(\text{coker}\, f\), to be

$$ \text{coker}\, f := B / \IM f $$

Elements of \(\text{coker}\,f\) are equivalence classes \([b]\) defined by the relation

$$ b \sim b' \Leftrightarrow \exists a \in A,\ b = b' + f(a)$$
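For linear maps between finite-dimensional vector spaces, the cokernel is very concrete: if \(f: \mathbb{R}^2 \to \mathbb{R}^3\) is a matrix, then \(\IM f\) is its column space and \(\dim \text{coker}\, f = 3 - \text{rank}\, f\). A sketch with a toy matrix of my own choosing:

```python
import sympy as sp

# f: R^2 -> R^3; im f is the column space.
f = sp.Matrix([[1, 0],
               [0, 1],
               [1, 1]])

print(3 - f.rank())      # 1: coker f is a line's worth of classes [b]
print(f.T.nullspace())   # [Matrix([[-1], [-1], [1]])]: a functional killing im f
```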

This brings us to another helpful lemma:

Lemma:
If \( A \xrightarrow{f} B \xrightarrow{g} C \rightarrow 0 \) is an exact sequence, where \(f\) and \(g\) are linear morphisms, then there is a linear isomorphism \(C \cong \text{coker}\,f\).

Suppose we have some cochain complex

$$ \dots C^{-1} \xrightarrow{d_{-1}} C^0 \xrightarrow{d_0} C^1 \xrightarrow{d_1} C^2 \xrightarrow{d_2} \dots $$

We say that a \(k\)-cochain \(c \in C^k\) is a \(k\)-cocycle if \(d_k(c) = 0\) and is a \(k\)-coboundary if there exists some \(b \in C^{k-1}\) such that \( d_{k-1}(b) = c\). We often denote the set of \(k\)-cocycles as \(Z^k(\mathcal{C}) = \ker d_k\) and the set of \(k\)-coboundaries as \(B^k(\mathcal{C}) = \IM d_{k-1}\).

See where this is going..?

We define the degree-\(k\) cohomology of \(\mathcal{C}\) to be the quotient space

$$ H^k(\mathcal{C}) := Z^k(\mathcal{C}) / B^k(\mathcal{C}) $$
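To see these definitions in action outside of de Rham land, here is a sketch computing the cohomology of a toy cochain complex: \(0 \rightarrow C^0 \xrightarrow{d_0} C^1 \rightarrow 0\), where \(C^0\) and \(C^1\) are functions on the vertices and edges of the oriented triangle from the boundary discussion (a combinatorial circle), and \(d_0\) is the transpose of the edge-boundary matrix there. Reassuringly, the answer matches \(H_{dR}^*(S^1)\):

```python
import sympy as sp

# Cochains on the triangle: C^0 = functions on the 3 vertices,
# C^1 = functions on the 3 oriented edges 0->1, 1->2, 2->0.
# (d0 h)(edge u->w) = h(w) - h(u):
d0 = sp.Matrix([[-1,  1,  0],
                [ 0, -1,  1],
                [ 1,  0, -1]])

h0 = len(d0.nullspace())  # H^0 = ker d0 (no incoming map)
h1 = 3 - d0.rank()        # H^1 = C^1 / im d0 (no outgoing map)

print(h0, h1)  # 1 1: one copy of R in each degree, just like the circle
```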

We didn't really need an extra level of generality here (that is, you could probably prove the snake lemma specifically for de Rham cohomology). However, I think that the generalization of cohomology makes the proofs much more elegant since they now apply to simplicial cohomology, singular cohomology, etc.

But what happens when we have two cochain complexes, say \(\mathcal{A} = \{ A^k, d_k\}\) and \(\mathcal{B} = \{ B^k, d'_k \}\), and want to relate the two? Well, as long as the morphisms in each step commute with our differentials \(d\) and \(d'\), we can produce a cochain map! Formally, we define a cochain map \(\Phi : \mathcal{A} \to \mathcal{B}\) to be a collection of linear morphisms \(\{ \phi_k : A^k \to B^k \}\) such that the following square commutes (i.e. \( \phi_{k+1} \circ d_k = d'_k \circ \phi_k\)):

$$ \require{AMScd} \begin{CD} A^{k+1} @>{\phi_{k+1}}>> B^{k+1} \\ @A{d_k}AA @A{d'_k}AA \\ A^k @>{\phi_k}>> B^k \end{CD} $$

Note that we require our morphisms \(\phi_k : A^k \to B^k\) to be linear so that the induced map \( \Phi^*: H^k(\mathcal{A}) \to H^k(\mathcal{B})\) defined by \( [a] \mapsto [\phi_k a] \) is well-defined. To show that \(\Phi^*\) is well-defined, it suffices to show that \(\Phi\) maps cocycles to cocycles and coboundaries to coboundaries. For the first, note that if \(a \in Z^k(\mathcal{A})\), then \(d_k a = 0\). Since each \(\phi_k\) is linear and the diagram commutes, this gives us \(d'_k(\phi_k(a)) = \phi_{k+1}(d_k(a)) = \phi_{k+1}(0) = 0\)

$$ \require{AMScd} \begin{CD} 0 @>{\phi_{k+1}}>> 0 \\ @A{d_k}AA @A{d'_k}AA \\ a \in Z^k(\mathcal{A}) @>{\phi_k}>> \phi_k(a) \in Z^k(\mathcal{B}) \end{CD} $$

\(\Phi\) automatically maps coboundaries to coboundaries since our differentials commute (i.e. \(\phi_{k+1}(d_k a) = d'_k(\phi_k(a))\) ). Linearity is not required for this property.

You may ask yourself, "Where will I ever need a cochain map? The only cochain complex I know of is the de Rham complex." Fortunately, that is all you need to know! Since each manifold produces its own de Rham complex, smooth functions between manifolds give us cochain maps! Suppose \(M\) and \(N\) are smooth manifolds, and \(F: M \to N\) is smooth. Then the pullback \(F^* : \Omega^*(N) \to \Omega^*(M)\), restricted to \(k\)-forms in each degree, is our cochain map, and the induced map on cohomology is precisely \(F^\#: H_{dR}^*(N) \to H_{dR}^*(M)\).

It turns out that we can create sequences of cochain maps as well! If you think about it, this is kind of like a \(2\)-dimensional cochain complex 😳

$$ \require{AMScd} \begin{CD} @. \vdots @. \vdots @. \vdots \\ @. @A{d_{k+1}}AA @A{d'_{k+1}}AA @A{d_{k+1}''}AA @. \\ \dots @>>> A^{k+1} @>{\phi_{k+1}}>> B^{k+1} @>{\psi_{k+1}}>> C^{k+1} @>>> \dots \\ @. @A{d_k}AA @A{d'_k}AA @A{d_k''}AA @. \\ \dots @>>> A^k @>{\phi_k}>> B^k @>{\psi_k}>> C^k @>>> \dots \\ @. @A{d_{k-1}}AA @A{d'_{k-1}}AA @A{d_{k-1}''}AA @. \\ \dots @>>> A^{k-1} @>{\phi_{k-1}}>> B^{k-1} @>{\psi_{k-1}}>> C^{k-1} @>>> \dots \\ @. @A{d_{k-2}}AA @A{d'_{k-2}}AA @A{d_{k-2}''}AA @. \\ @. \vdots @. \vdots @. \vdots \\ \end{CD} $$

"Why are these useful?", you might ask. Turns out that there is a particular sequence of cochain maps that is exceptionally important in defining the Mayer-Vietoris sequence.

Formally, let \(\mathcal{A}\), \(\mathcal{B}\), and \(\mathcal{C}\) be cochain complexes, and let \(i : \mathcal{A} \to \mathcal{B}\) and \(j: \mathcal{B} \to \mathcal{C}\) be cochain maps. If

$$ 0 \rightarrow A^k \xrightarrow{i_k} B^k \xrightarrow{j_k} C^k \rightarrow 0 $$

is exact for each \(k\) then we call the sequence

$$ 0 \rightarrow \mathcal{A} \xrightarrow{i} \mathcal{B} \xrightarrow{j} \mathcal{C} \rightarrow 0 $$

a short exact sequence. It's now time for me to introduce an incredibly important lemma (which really should be more of a theorem but its main use is to make the proof of Mayer-Vietoris more straightforward, so who knows).


The Zig-Zag Lemma:
If $$ 0 \rightarrow \mathcal{A} \rightarrow \mathcal{B} \rightarrow \mathcal{C} \rightarrow 0 $$ is a short exact sequence of cochain complexes and each differential morphism is linear, then there is a long exact sequence in cohomology: $$ \cdots \rightarrow H^k(\mathcal{A}) \rightarrow H^k(\mathcal{B}) \rightarrow H^k(\mathcal{C}) \xrightarrow{\delta} H^{k+1}(\mathcal{A}) \rightarrow \cdots $$
How the ZigZag Lemma gets its name

It should be relatively clear from the cohomology diagram why the lemma is named the way it is. This next lemma isn't actually necessary for me to explain the Mayer-Vietoris sequence, but I figure I may as well provide it for fun:


A depiction of the Snake Lemma using a cartoon drawing of the 'Dont Tread on Me' flag
The snake lemma, more commonly known (to me) as the no step on snek lemma
The Snake Lemma:
A commutative diagram with linear morphisms \(\alpha, \beta, \gamma\) and exact rows $$ \require{AMScd} \begin{CD} 0 @>>> A^1 @>{i_1}>> B^1 @>{j_1}>> C^1 @>>> 0 \\ @. @A{\alpha}AA @A{\beta}AA @A{\gamma}AA @. \\ 0 @>>> A^0 @>{i_0}>> B^0 @>{j_0}>> C^0 @>>> 0 \end{CD} $$ induces a long exact sequence $$ 0 \rightarrow \ker \alpha \rightarrow \ker \beta \rightarrow \ker \gamma \rightarrow \coker \alpha \rightarrow \coker \beta \rightarrow \coker \gamma \rightarrow 0 $$
The Snake Lemma

LONG. H*CKIN. PROOFS.


But the feeling one gets after proving a long theorem such as this is like crack to grad students, so I couldn't bear to not include the proof.







I know — you must be upset that I broke my word and threw a little category theory / homological algebra in there, but it puts hair on your chest. Anyways, we can step down a level of generality or two and retrace our steps back to de Rham cohomology (which is still pretty damn abstract, but an improvement from where we just were). Notice that our exterior derivative \(d: \Omega^*(M) \to \Omega^*(M)\) satisfies all the properties required of cochain complexes (linearity, \(d \circ d = 0\), and commutativity with pullbacks), so we don't really need to check hypotheses when it comes to using the zig-zag lemma or snake lemma.

So how do we apply what we just learned to manifolds 🧐? Sure, we could somehow find the pullbacks of smooth mappings between three separate manifolds \(M\), \(N\), and \(P\), but those don't come up as often as you would think. But what about submanifolds? What if, instead of devising some elaborate map from one manifold to another, we simply consider the inclusion map \(i_U: U \to M\) defined by \(i_U(p) = p\) for some subset \( U \subset M\)?

Turns out this is quite useful specifically when we have a manifold that can be covered by two open submanifolds, call them \(U\) and \(V\). Note that since our inclusion map is defined as \(i_U: U \to M\), the pullback is a map

$$ i^*_U : \Omega^*(M) \to \Omega^*(U) $$

Given two subsets \(U, V \subset M\), we define a joint inclusion \(i: \Omega^*(M) \to \Omega^*(U) \oplus \Omega^*(V) \) by

$$ i(\omega) = (i^*_U \omega, i^*_V \omega) $$

pretty simple.

Next, suppose we have another layer of inclusion maps: \(j_U: U \cap V \to U\) and \(j_V: U \cap V \to V\). We define the difference map \(j: \Omega^*(U) \oplus \Omega^*(V) \to \Omega^*(U \cap V)\) by

$$ j(\omega, \tau) = j^*_V \tau - j^*_U \omega $$

If we apply the exterior derivative componentwise (i.e. \(d(\omega, \tau) = (d\omega, d\tau)\)), then \(i\) and \(j\) commute with \(d\) automatically, so they are cochain maps. Thus, it remains to check whether the sequence

$$ 0 \rightarrow \Omega^*(M) \xrightarrow{i} \Omega^*(U) \oplus \Omega^*(V) \xrightarrow{j} \Omega^*(U \cap V) \xrightarrow{} 0$$

is exact.


That said, thank you guys for reading and I hope you enjoyed! 😁