When someone uses the term 'theoretical mathematics', the first thing that pops into many people's minds is either a blackboard riddled with dozens of equations or the idea of some higher-dimensional amorphous shape. The first image could not be more true; the second is a bit of a stereotype, but it is not far off — those amorphous higher-dimensional shapes are typically manifolds.

So what really is a manifold? A topological manifold of dimension \(n\) (also called an \(n\)-manifold) is a topological space \(X\) such that:

- \(X\) is locally Euclidean
- \(X\) is Hausdorff
- \(X\) is second-countable

I realize that probably 80% of the words in that definition mean absolutely nothing to the average reader, so I decided it would be a good idea to dedicate the first post in this series to explaining the basics. Now it turns out that basically every technical term in that definition is part of a branch of mathematics known as topology. Therefore, the process of explaining the basics is essentially familiarizing the reader with topology.

For mathematicians and physicists, the significance of a topology is that it generalizes the concept of what open sets and continuity are over an arbitrary space. To shed a little light on what I'm getting at: what exactly makes the set \( (-1, 1) \) open? Moreover, what makes the set \( [-1, 1] \) closed? Is the set \( \{1, 2\} \) (i.e. the set containing only the numbers 1 and 2) open, closed, or neither? Unfortunately, the vast majority of the population would say the first is open because it's surrounded by parentheses and the second is closed because it is surrounded by brackets (I'm not really sure what the general population's consensus is for the third, but you'll find out if you manage to retain interest throughout the article).

So why do you or should you care about the study of open sets? Well, for the first part, you probably don't (yet), but for the second part my answer is this: elementary topology by itself is a unique area of mathematics whose logic dances between the elegance of algebra and the tenacity of analysis. Alone, elementary topology can feel like an exercise in trial-and-error reasoning, deducing the greatest structure from the fewest assumptions (well, that's actually all of pure mathematics, but this blog post is simply focusing on one branch). However, with a little more development, topology leads to paramount subjects such as algebraic topology, differential geometry, and algebraic geometry, which are crucial to understanding advanced topics in relativity and even string theory.

I would love the opportunity to discuss string theory and enumerative geometry with you all, but it's going to be a bit of a long road to get there. Topology must come before manifolds, manifolds are required before relativity, somewhere along the way there should be a discussion of Chern classes, and after that — well, who's really reading at that point? This is likely going to be a series of posts for this blog, so when it comes to answering questions I may choose to answer some in the comments while answering others in subsequent posts.

For those of you who are truly acquainted with the subject: feel free to speak up and correct me where I'm wrong. For those of you who have no experience in the subject: everyone who ever told you there's no such thing as a stupid question was probably born before search engines became relevant (seriously though, Google first then ask — I'm a 22 year-old graduate student not a professor).

Most of the content I plan to talk about later in this article will likely be unfamiliar to the reader; however, in order to correctly introduce such topics, I must first delve deeper into a concept that many people know well: infinity. I say concept and not number because that is exactly what it is — 'infinite' is an adjective describing, for instance, the asymptotic behavior of functions (an infinite set, meanwhile, is defined to be a set with a proper subset having the exact same cardinality).

It's an unfortunate reality that many high school teachers or freshman calculus professors introduce infinity as if it were a number through notations like \( \lim_{x \to \infty} \sqrt{x} = \infty \). The equals sign in the equation often leads people to believe that infinity can be treated numerically, and thus interchanged with other numbers in many equations. This is not the case for \(\mathbb{R}\). A more accurate notation that I prefer instead is \(\sqrt{x} \to \infty\) as \(x \to \infty\), since it emphasizes the fact that the function *trends* along an asymptote.

As a brief aside, one can bypass the conceptual behavior of infinity described above by adopting the extended real line, \( \overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, \infty\}\), or the Riemann sphere \( \overline{\mathbb{C}} = \mathbb{C} \cup \{ \infty\}\). For both of these sets, the symbol \(\infty\) no longer describes asymptotic behavior but an actual element of the set — in \(\overline{\mathbb{R}}\), one satisfying \(x \lt \infty\) for all \(x \in \mathbb{R}\). In this setting, operations such as \( x \cdot \infty = \infty \) (for \(x \gt 0\)) and \(\frac{x}{0} = \infty\) (for \(x \neq 0\), on the Riemann sphere) are well-defined, though expressions like \(\infty - \infty\) and \(0 \cdot \infty\) remain undefined. It is a bit difficult for me to say why, in general, a high school or college student learning calculus should not simply always use the extended real line. The extended real line is widely accepted in measure theory since many sets naturally have 'infinite' volume, while the Riemann sphere is the primary set of focus in complex analysis. I can say, however, that the general topology student should not get \(\mathbb{R}\) and \(\overline{\mathbb{R}}\) confused — one is compact and one is not.

Though only a concept at this point, there must be some sort of way for us to tell which sets are 'infinitely more infinite' than others; the set \(\mathbb{R}\) is clearly larger than the set of whole numbers \(\mathbb{N}\), yet both are infinite. In a sense, we need some sort of number system to keep track of our infinities. This is where cardinality comes in.

For finite sets, cardinality is simply a natural number no different from the size of a set. For infinite sets, however, it no longer makes sense for us to count upwards to determine the number of elements. Instead, what mathematicians use are bijections. Think about it — if a function \(f: X \to Y\) is surjective, every element of the codomain \(Y\) has a preimage in \(X\), so the cardinality of \(X\) must be greater than or equal to the cardinality of \(Y\). On the other hand, if a function is injective, no two distinct elements of the domain \(X\) map to the same element in \(Y\), so the cardinality of \(Y\) must be greater than or equal to that of \(X\). Therefore, if we can find a bijection between two sets, then we know the sets must have equal cardinality.

By convention, the baseline cardinality for infinite sets is the cardinality of the natural numbers \(\mathbb{N} = \{1, 2, 3, \dots\}\), denoted by the character \(\aleph_0\) (pronounced aleph null). If a set is finite or has a bijection with \(\mathbb{N}\), it is said to be countable. For example, take the integers \(\mathbb{Z} = \{\dots, -2, -1, 0, 1, 2, \dots\}\). Then we can easily find a bijection \(f: \mathbb{N} \to \mathbb{Z}\) defined by \(n \mapsto (-1)^n \lfloor n/2 \rfloor\), which basically just bounces ad infinitum in both directions (i.e. \(0, 1, -1, 2, -2, 3,\dots\)).
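To make this concrete, here is a quick sketch of the bijection above in Python (the function name `f` simply mirrors the text, and \(\mathbb{N}\) starts at \(1\)):

```python
def f(n):
    # n -> (-1)^n * floor(n/2): even n land on positives, odd n on 0 and negatives
    return (n // 2) if n % 2 == 0 else -(n // 2)

# the enumeration bounces outward in both directions
print([f(n) for n in range(1, 8)])  # [0, 1, -1, 2, -2, 3, -3]

# sanity check: the first 201 inputs hit every integer in [-100, 100] exactly once
assert {f(n) for n in range(1, 202)} == set(range(-100, 101))
```

The final assertion is the finite shadow of bijectivity: each block of \(2k+1\) inputs covers \(\{-k, \dots, k\}\) with no repeats.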

Another example that is important for me to discuss is the countability of the rational numbers \(\mathbb{Q}\) (i.e. numbers which can be represented as fractions). To start off, consider \( \mathbb{N} \times \mathbb{N} \) and think of the first coordinate as the numerator and the second coordinate as the denominator.

If we try to enumerate the positive rationals starting with the first row and going to the right in anticipation of a snake pattern (i.e. winding back around at the end of the row), we will never make it to the second row! In fact, it is impossible to cover the entirety of \(\mathbb{N} \times \mathbb{N}\) with any sort of snake pattern, since tackling even a single infinite row or column would exhaust the entire enumeration before we ever moved on to the next.

In the late nineteenth century, the father of set theory, Georg Cantor, devised a way to traverse the positive rationals without getting lost in the infinity of any one row or column: proceed along the diagonal lines connecting the coordinates \( (n, 1) \) to \( (1, n) \) for each \(n \in \mathbb{N}\). Although the sizes of the diagonal lines grow linearly each iteration, for a fixed \(n \in \mathbb{N}\) the length is always finite; therefore, we are able to cover \(\mathbb{N} \times \mathbb{N}\) as \(n \to \infty\).
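Cantor's diagonal walk is easy to sketch in code. The following Python snippet (function name my own) lists \(\mathbb{N} \times \mathbb{N}\) diagonal by diagonal, so any fixed pair \((i, j)\) is reached after finitely many steps:

```python
def diagonals(max_n):
    # traverse the diagonals (n, 1) -> (1, n); the pairs on diagonal n
    # are exactly those with i + j = n + 1
    pairs = []
    for n in range(1, max_n + 1):
        for i in range(n, 0, -1):
            pairs.append((i, n + 1 - i))
    return pairs

print(diagonals(3))  # [(1, 1), (2, 1), (1, 2), (3, 1), (2, 2), (1, 3)]

# no pair is listed twice, and every pair near the origin shows up early
listed = diagonals(10)
assert len(listed) == len(set(listed))
assert all((i, j) in set(listed) for i in range(1, 6) for j in range(1, 6))
```

The key point mirrors the text: each diagonal is finite, so the walk never gets stuck inside a single row or column.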

Now if we combine this logic with the countability of the integers, the transitive property tells us that there exists a bijection from \(\mathbb{N}\) to \(\mathbb{Z} \times \mathbb{N}\). This is effectively all we need: the rationals are the set of equivalence classes over \(\mathbb{Z} \times \mathbb{N}\), where two pairs are equivalent when they represent the same fraction (notice how the diagonal elements along \( (n, n) \) all represent the same number, \(1\)), and a surjection from a countable set onto \(\mathbb{Q}\) shows that \(\mathbb{Q}\) is countable.

The last thing I want to do is show that the continuum, \(\mathfrak{c}\), has greater cardinality than \(\aleph_0\). This problem was also solved by Georg Cantor in the late nineteenth century. Instead of even considering the full set of real numbers \(\mathbb{R}\), simply consider the set of infinite binary expansions of numbers in \( [0, 1] \). If we can show that the set \( [0, 1] \subset \mathbb{R} \) is larger than \(\mathbb{N}\), then obviously the result will follow for \(\mathbb{R}\).

Before considering the case of infinite binary decimal expansions, think about \(n\) distinct binary decimals that terminate after \(n\) digits. If we line them all up, take the \(i^{th}\) bit from the \(i^{th}\) decimal, and negate it (i.e. \(0\) becomes \(1\) and \(1\) becomes \(0\)), then we have created a completely new decimal! Whenever we negate a single bit from one of our decimals, no matter what our new number is it must be distinct from the decimal we just negated. Since we are negating information from each and every one of our decimals available, the new decimal is distinct from all others.
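Here is the finite version of that argument as a small Python sketch (the helper name `diagonal_flip` is mine):

```python
def diagonal_flip(rows):
    # take the i-th bit of the i-th string and negate it
    return "".join("1" if row[i] == "0" else "0" for i, row in enumerate(rows))

rows = ["0110", "1010", "0011", "1111"]
new = diagonal_flip(rows)
print(new)  # 1100

# the new string differs from row i in position i, so it equals none of them
assert all(new != row for row in rows)
```

This works for any list of \(n\) strings of length \(n\), which is exactly the setup described above.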

The idea for the infinite case is the same as it is for the finite case: suppose we had an enumeration — an infinite list — of infinite binary expansions. If we take the \(n^{th}\) bit from our \(n^{th}\) decimal, negate it, and proceed as \(n \to \infty\), then the resulting decimal could not possibly be contained in the list we started with. Therefore, every attempted enumeration of the infinite binary decimal expansions will fail to cover all of \([0,1]\), and thus \(\mathbb{R}\) is uncountable.

I will not show the proof here, but it turns out that the power set (set of all subsets) of \(\mathbb{N}\) has a bijective correspondence with \(\mathbb{R}\). For a finite set \(X\) with \(n\) elements, the cardinality of the power set of \(X\), \(\mathcal{P}(X)\), is exactly \(2^n\); therefore, it is common to see the continuum denoted by \(\mathfrak{c} = 2^{\aleph_0}\). This brings us to one of the most famous problems in the history of mathematics: the continuum hypothesis. Originally introduced in 1878 by our friend Georg Cantor, the continuum hypothesis states that there does not exist any set whose cardinality is strictly between \(\aleph_0\) and \(2^{\aleph_0} = \mathfrak{c}\). (Remarkably, Gödel and Cohen later showed that this statement can neither be proved nor disproved from the standard axioms of set theory.) That's pretty cool, huh? Through some rudimentary set theory tricks and a few diagrams, we have proved that the philosophical idea of being 'infinitely more infinite' corresponds to the mathematical idea of power sets and exponentiation.
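The claim \(|\mathcal{P}(X)| = 2^n\) for finite sets is easy to verify computationally; a quick Python check using nothing beyond the standard library:

```python
from itertools import combinations

def power_set(xs):
    # every subset of xs, grouped by size r = 0, 1, ..., len(xs)
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

for n in range(6):
    assert len(power_set(range(n))) == 2 ** n  # |P(X)| = 2^n
```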

Topological spaces (as they are defined today) became a large area of interest around the mid-nineteenth century, when mathematicians such as Gauss and Riemann built upon Euler's earlier studies of surfaces. This work would eventually spark a huge interest, leading to a vast amount of research on homology and cohomology groups in the mid-twentieth century by mathematicians such as Heinz Hopf, Armand Borel, and Frank Adams.

The idea of a topological space was conceived as an abstraction of Euclidean space, so that there is just enough structure to support continuous functions; that is, a topological space was constructed to be the bare minimum and crux of continuity. Before I go any further, this seems like an appropriate time to introduce an important theorem: a function \(f: \mathbb{R}^n \to \mathbb{R}^m\) is continuous (in the usual \(\epsilon\)–\(\delta\) sense) if and only if the preimage \(f^{-1}(U)\) of every open set \(U \subseteq \mathbb{R}^m\) is open.

To give a brief aside for those of you who are not familiar with theoretical mathematics: there is no substantial reason for you to look at the proofs or become too worried when you do not understand the mechanics of a proof; however, proofs are the mortar and pestle of higher-level mathematics. The beauty of theoretical mathematics lies in assuming the least amount of structure and deducing staggering truths, which ultimately are innate to the laws of everyday logic. Sure, you'll likely admire the finished product by the end of the blog series — but you will not appreciate the work that was put into building it.

Addressing the proof above, I withheld an important property of open sets in Euclidean space: a set \( U \) is open if and only if for every point in that set, there exists a ball of positive radius centered at that point that is also contained in the set. For example, take the set \( (0, 1) \). You could pick a number incredibly close to \(0\), say \( 10^{-1000} \); yet, I can always pick a radius even smaller, say \( \frac{1}{3}\cdot 10^{-1000} \), and the ball of that radius centered at \( 10^{-1000} \) is still contained in \( (0, 1) \), since its boundary is \( \frac{2}{3}\cdot 10^{-1000} \) away from \(0\). This is the idea of open sets in Euclidean space that the average reader is probably familiar with — *I can creep as close to the boundary as I want, but I will never be able to look off the edge*.

Alright, now it's time to formally define a topology. Suppose we have some mathematical set \( X \). In its purest form, a topology \( \tau \) is a collection of subsets of \( X \) which satisfies the three topological axioms:

- The empty set, \( \emptyset \), and the entire set, \( X \), are both elements of \( \tau \)
- The arbitrary union of (i.e. combination of however many) sets in \( \tau \) is also in \( \tau \)
- The intersection of finitely many elements of \( \tau \) is in \(\tau\)

(If you were to actually look up the third property in most textbooks, it would only say the intersection of two sets, but finitely many is a direct result by induction.) A set \(V \subset X\) is said to be __open__ if \(V \in \tau\). Lastly, to generalize the theorem above, a function \(f: X \to Y\) between two topological spaces \((X, \tau_X)\) and \((Y, \tau_Y)\) is said to be continuous if for every \(U \in \tau_Y\), the preimage \(f^{-1}(U)\) is an element of \(\tau_X\).

For example, say we have two spaces \(X = \{a, b, c \} \) and \( Y = \{ d, e, f \} \), along with topologies assigned to each of them \( \tau_X = \{ \emptyset, \{ a \}, \{ b \}, \{ a, b \}, X \} \) and \( \tau_Y = \{ \emptyset, \{ e \}, Y \} \) (note that both these topologies satisfy the topological axioms). Then every mapping \(h: X \to Y \) is continuous except when \( h^{-1}(e) = \{c\} \), \( h^{-1}(e) = \{b, c\} \), or \( h^{-1}(e) = \{a, c \} \) (note that if no element in \(X\) maps to \(e\) then we are fine, since that would mean \(h^{-1}(e) = \emptyset\) which is also open).
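Finite examples like this are small enough to check exhaustively. Below is a Python sketch (all function names are my own) that verifies the topological axioms for \(\tau_X\) and \(\tau_Y\) and then counts the discontinuous maps described above:

```python
from itertools import product

X, Y = frozenset("abc"), frozenset("def")
tau_X = {frozenset(), frozenset("a"), frozenset("b"), frozenset("ab"), X}
tau_Y = {frozenset(), frozenset("e"), Y}

def is_topology(space, tau):
    # axiom 1: the empty set and the whole space belong to tau
    if frozenset() not in tau or space not in tau:
        return False
    # axioms 2 and 3: for a finite collection it suffices to check
    # pairwise unions and intersections
    return all(U | V in tau and U & V in tau for U, V in product(tau, repeat=2))

def is_continuous(f, tau_src, tau_tgt):
    # continuity: the preimage of every open set is open
    return all(frozenset(x for x in f if f[x] in U) in tau_src for U in tau_tgt)

assert is_topology(X, tau_X) and is_topology(Y, tau_Y)

# enumerate all 27 maps X -> Y and pick out the discontinuous ones
maps = [dict(zip("abc", values)) for values in product("def", repeat=3)]
bad = [f for f in maps if not is_continuous(f, tau_X, tau_Y)]
assert len(bad) == 8  # preimage of {e} equal to {c}, {a,c}, or {b,c}
```

The count of \(8\) matches the text: \(4\) maps with \(h^{-1}(e) = \{c\}\), plus \(2\) each for \(\{a, c\}\) and \(\{b, c\}\).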

So how does this new definition of discontinuity fit into our old definition of discontinuity? Consider the typical unit jump function \(f: \mathbb{R} \to \mathbb{R}\) defined by

$$ f(x) = \begin{cases}0, & x \lt 0 \\ 1, & x \geq 0 \end{cases} $$

Take the open ball of radius \(0.5\) centered at \(1\), \(B_{0.5}(1)\). If you visualize this open set lying on the vertical axis (since we want to consider this open set in the codomain, not the domain), then the preimage is \(f^{-1}\big(B_{0.5}(1)\big) = [0, \infty)\), which is not an open set.
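As a quick sanity check, here is the jump function in Python together with membership tests for the preimage of \(B_{0.5}(1)\) (helper names are mine):

```python
def f(x):
    # the unit jump function from the text
    return 0 if x < 0 else 1

def in_preimage(x, lo=0.5, hi=1.5):
    # is x in f^{-1}(B_0.5(1)), i.e. does f(x) land in (0.5, 1.5)?
    return lo < f(x) < hi

# the preimage is [0, infinity): it contains the boundary point 0,
# so no open ball around 0 fits inside it -- hence it is not open
assert in_preimage(0) and in_preimage(100.0)
assert not in_preimage(-1e-12)
```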

Alright, so now that you are seeing that open sets are pretty much whatever you define them to be (as long as the topology satisfies the three axioms), where do closed sets fit into all this? Simple: the complement of an open set is a closed set. Referring to the example above with \( X = \{a, b, c \} \) and \( \tau_X = \{ \emptyset, \{ a \}, \{ b \}, \{ a, b \}, X \} \), we just need to take the complement of every open set: $$ \{ X - \emptyset, X - \{ a \}, X - \{ b \}, X - \{a, b\} , X - X\} \\= \{ X, \{ b, c \}, \{ a, c \}, \{ c \}, \emptyset \} $$ But hold on — this would mean that the entire set \(X\) and the empty set \( \emptyset \) are both open and closed, which doesn't seem to make sense whatsoever! That would be correct.

To make sense of it, a nice property of closed sets is that sequences which converge inside a closed set must also have their limit point contained in that set. Consider the real numbers \( \mathbb{R} \), which the everyday reader has been working with their whole life. If you have a sequence in \( \mathbb{R} \) and you know that sequence converges, then obviously the limit point is going to be in \( \mathbb{R} \) also; hence, \( \mathbb{R} \) must be closed. However, \( \mathbb{R} \) also has the property that I mentioned before, where I can creep close to the edge but never look over (i.e. I can always find an open set surrounding me). Therefore, \( \mathbb{R} \) must also be open.
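Continuing the finite example, the complements can be computed directly; a short Python sketch:

```python
X = frozenset("abc")
tau_X = {frozenset(), frozenset("a"), frozenset("b"), frozenset("ab"), X}

# closed sets are the complements of the open sets
closed = {X - U for U in tau_X}
assert closed == {X, frozenset("bc"), frozenset("ac"), frozenset("c"), frozenset()}

# X and the empty set are both open and closed ("clopen")
assert X in tau_X and X in closed
assert frozenset() in tau_X and frozenset() in closed
```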

Now let me introduce the idea of a separation: imagine we took the real number line \( \mathbb{R} \) and cut it in half at some point \( x \) — what is left over could then be represented as the union of two open sets, i.e. \( (-\infty, x) \cup (x, \infty) \). We went from a connected line to a disconnected line, and the only thing that changed is that we could represent the entire space as the union of two nonempty open sets. This is exactly the generalization of a space being connected or disconnected: a space is disconnected if there exists a separation (i.e. the space can be represented as the union of two nonempty open sets which do not intersect). A space is connected if it is not disconnected (big surprise there).

Pulling the two previous paragraphs together brings us to an important theorem: a space \(X\) is connected if and only if the only subsets of \(X\) that are both open and closed are \(\emptyset\) and \(X\) itself.

Personally, I think that's pretty cool; we started out with only three axioms for what a topological space is, and we built up enough to define what it means for any space to be connected. I just want to introduce a few more theorems about connectedness and then we'll be good to move onto the next section:

We've gone over almost ALL elements of this proof, except a very subtle bit of elementary set theory that I snuck in there: the preimage of a union is the union of preimages, the preimage of an intersection is the intersection of preimages, and the image of a union is the union of the images (however, it is *not* true that the image of an intersection is the intersection of the images).

At this point, we surprisingly have enough information to prove the Intermediate Value Theorem for real valued functions (you could technically generalize the IVT a little more so that it maps into a totally ordered set endowed with the order topology, but the proof is literally the same).

We now introduce a fundamental concept in the study of topology: a homeomorphism. The definition of a homeomorphism is fairly straightforward: it is a continuous bijection with a continuous inverse. To break it down, recall that a continuous function has the property that preimages of open sets are open; since the inverse is continuous as well, the function sends open sets to open sets in both directions. Moreover, since the function is a bijection, we have a nice pairing between open sets! In other words, a homeomorphism says that two topological spaces essentially have the same structure.

It is now time for us to introduce our last theorem on connectedness:

And there you have it, basically all you've ever wanted to know about what makes a space connected and where that gets you. Later on (not in this post) I'll introduce the concept of path connectedness for the sake of the fundamental group and homology; though this is not a difficult idea per se, the realm of homology certainly requires a solid background in elementary topology.

This is a somewhat hard concept to introduce, but it would be almost impossible for me to go forward in the study of topology without it. Consider a singleton set \(\{ a \}\) in \(\mathbb{R}\). Recall that a set is closed if its complement is open. Well, the complement of \(\{ a \}\) is simply the set \( (-\infty, a) \cup (a, \infty) \). Since the union of open sets is open by our topological axioms, we see that a singleton is closed. Now, under the laws of basic set theory (specifically De Morgan's Law), the complement of a union is the intersection of complements and the complement of an intersection is the union of complements. Hence, the axioms for closed sets change a little for a topological space \(X\):

- The empty set \(\emptyset\) and the ambient space \( X \) are both closed
- The union of finitely many closed sets is closed
- The arbitrary intersection of closed sets is closed

Again, if you were to look at most textbooks, (\(2\)) would say something more like the union of two closed sets is closed — the result simply follows by induction.

So what makes the closed sets \( [0, 1] \) and \( \{ 2^{-100}, 2^{-99}, \dots, 2^{0} \} \) so special among closed sets? The answer is compactness (a property both sets share, even though only the first of the two is connected). Back when mathematicians studied metric spaces (which generalize spaces like Hilbert spaces and Banach spaces), they thought that the special property of the set \( [a, b] \) in \(\mathbb{R}\) was something that we now call limit point compactness. However, the concept of Hausdorff spaces (which I will introduce later) began to confuse mathematicians as to whether the nice properties came from limit point compactness, sequential compactness, or regular compactness. Eventually, mathematicians would arrive at the most general form of compactness in a topological space.

The first thing I need to define is the concept of an open cover. Let \(X\) be my ambient space. A collection \(\{ V_\alpha \}\) is simply called a __cover__ if \( \bigcup_{\alpha}V_\alpha = X \). This makes sense — we have a bunch of sets, and we say that the collection is a cover if it LITERALLY covers the space. We call such a cover an *open cover* if the collection is comprised of only open sets (shocker).

Alright, enough with the foreplay. Let \(X\) be a topological space — we call \(X\) compact if every open cover has a finite subcover.

The first time I saw this definition, my initial thought was '*what does throwing a collection of open sets on top of my initial set have anything to do with the interval* \([a, b]\)?' I didn't really get a good answer until about the fifth time I saw the Heine-Borel Theorem, which basically meanders along a heuristic path until you've finally dealt with enough contradictions to be convinced that compactness means closed and bounded in \(\mathbb{R}\).

As a brief aside, a difficult thing for people outside of mathematics or philosophy to appreciate is the use of existential and universal quantifiers. When I say that a space is compact if every open cover has a finite subcover, that does not mean I can just go and choose some random open cover for a set to show it's compact. On the other hand, to show a space is *not* compact, I simply have to exhibit one open cover that does not have a finite subcover. For that reason, it is orders of magnitude easier to show a set is not compact than it is to show that a set is compact.

I'll now introduce a few important theorems:

Most of the additional compactness theorems go beyond our scope; however, after we introduce Hausdorff spaces in the next section, we will briefly see compactness come into play. For those who wish to study analysis or partial differential equations, you will find that compactness plays a huge role in terms of compact support.

If you remember from earlier, I discussed that an important property of open sets in Euclidean space is that I can pick any point in an open set and still find some ball of positive radius around that point contained in my open set. Now imagine that instead of looking at a point inside an open set, I'm looking at two distinct points, \(x\) and \(y\), on the real line. As long as \(x \neq y\), I can always find an open ball around each point such that the two open balls do not intersect.

For example, suppose I have the points \(x = 0\) and \(y = 10^{-1000}\). Then I can simply take my radius to be \(r = \frac{1}{3} \cdot 10^{-1000}\) so that the balls \(B_r(x)\) and \(B_r(y)\) are disjoint.

This is the idea behind a Hausdorff space.

Before I go any further, I'm going to have to change my vocabulary a bit for you guys; as much as I love talking about balls, topologists generally refer to an open set containing a specified element as a neighborhood. One reason we switch from balls to neighborhoods is that, topologically at least, it does not make a difference whether a set is centered at a point or merely contains it.

Alright, with that behind us, it's time to formally define a Hausdorff space. We say that a space \(X\) is Hausdorff if given any two distinct points \(x \neq y\), there exist neighborhoods \(U, V \subset X\) such that \(x \in U\), \(y \in V\), and \(U \cap V = \emptyset\).

We are now beginning to build up a bit of structure. A Hausdorff space is still a long way from where we want to be, but it gives us just enough to work with to introduce two new theorems:

The Hausdorff property will prove to be a useful asset when it comes time to introduce manifolds in the next post. In fact, the notion of homeomorphisms will also allow us to define useful local properties of manifolds. With that said, all that's really left is the concept of a basis.

When topologies were introduced earlier in this article, it became apparent that many open sets in a topology were merely unions of smaller open sets. Consider the discrete topology on the set \(X = \{ a, b, c, d \} \)

$$ \tau_D = \{\emptyset, \{a\}, \{b\}, \{c\}, \{d\}, \{a, b\}, \{a, c\}, \{a, d\}, \{b, c\}, \{b, d\}, \{c, d\}, \\ \{a, b, c\}, \{a, b, d\}, \{a, c, d\}, \{b, c, d\}, X\}$$

There are only four elements (not including the empty set) that cannot be broken down any further: the singletons. Every open set in \(X\) can be formed by combining some number of elements from \( \{a\}, \{b\}, \{c\}, \{d\}\) — this is the idea behind a basis.
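To see this generation idea in code: starting from the four singletons, taking all possible unions recovers the whole discrete topology. A Python sketch (finite sets only, standard library only):

```python
from itertools import combinations

points = "abcd"
singletons = [frozenset(p) for p in points]

# form every possible union of basis elements (the empty union gives the empty set)
tau = {frozenset().union(*combo)
       for r in range(len(singletons) + 1)
       for combo in combinations(singletons, r)}

assert len(tau) == 2 ** len(points)  # all 16 open sets of the discrete topology
```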

Formally, a __basis__ \(\mathcal{B}\) for a topology \(\tau\) over \(X\) is a collection of sets in \(\tau\) that satisfies:

- \(\mathcal{B}\) covers \(X\)
- If \(B_1 \cap B_2 \neq \emptyset\) for \(B_1, B_2 \in \mathcal{B}\) and \(x \in B_1 \cap B_2\), then there exists some \(B_3 \in \mathcal{B}\) with \(B_3 \subseteq B_1 \cap B_2\) and \(x \in B_3\)

Property \(2\) may seem a bit obscure, but it essentially ensures that our basis captures the smallest elements possible so that it can actually generate the topology as desired (in other words, you can represent big sets with small sets, but you can't represent small sets with big sets).
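The two conditions are mechanical enough to check for finite examples. A Python sketch (the function name `is_basis` is mine):

```python
def is_basis(space, B):
    # condition 1: the collection covers the space
    if frozenset().union(*B) != space:
        return False
    # condition 2: every point x of an overlap B1 & B2 lies in some
    # basis element B3 contained in that overlap
    return all(any(x in B3 and B3 <= B1 & B2 for B3 in B)
               for B1 in B for B2 in B for x in B1 & B2)

X = frozenset("abcd")
singletons = {frozenset(p) for p in X}
assert is_basis(X, singletons)

# {a,b} and {b,c} overlap in {b}, but no member fits inside that overlap
bad = {frozenset("ab"), frozenset("bc"), frozenset("cd"), frozenset("da")}
assert not is_basis(X, bad)
```

The failing example shows why condition \(2\) matters: the two-element sets cover \(X\), but they cannot generate the singleton \(\{b\}\) that a topology containing their intersections would need.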

Bases are powerful tools for streamlining proofs that would otherwise be burdensome to tackle. It is much easier to prove a fact for a small family of basis elements and observe how the result holds under unions than it is to prove that fact for a huge family of open sets. For example, if I wanted to prove the statement:

Every open set in \(\mathbb{R}\) is measurable

I would simply prove the statement for an arbitrary open ball \(B_\epsilon(x)\) — since every open set in \(\mathbb{R}\) is a countable union of open balls, and countable unions of measurable sets are measurable, the general result follows.

Recall from earlier that a set is said to be countable if it is finite or has a bijective correspondence with \(\mathbb{N}\). When it comes to topological spaces, we no longer care about the underlying number of *elements* in our set but the number of *open sets* in our topology. Note, however, that many of our open sets are merely unions of smaller open sets. Therefore, what we *REALLY* care about is the size of our basis. We call a topological space second-countable if there exists a countable basis for the topology.

For example, \(\mathbb{R}\) is second-countable if you consider the basis of open balls centered at rational points with positive rational radii, \(\mathcal{B} = \{ B_q(p) : p, q \in \mathbb{Q},\ q \gt 0 \}\).

We have one final proof and then we're done. To start off, given a topological space \(X\) and set \(S \subset X\), we say that a point \(p \in X\) is a limit point of \(S\) if every neighborhood containing \(p\) intersects \(S\) (that is, any open set containing \(p\) must also contain a point of \(S\)).

The closure of \(S\), denoted \(\overline{S}\), is defined to be the union of \(S\) along with its limit points. We say that \(S\) is dense in \(X\) if \(\overline{S} = X\). For example, the rational numbers \(\mathbb{Q}\) are dense in \(\mathbb{R}\) since between every two real numbers there exists a fraction (and thus every open ball must contain a rational number between its center and its boundary).
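The 'fraction between any two reals' fact can even be made constructive: pick \(n\) with \(1/n < b - a\); then the interval \((an, bn)\) has length greater than \(1\), so it contains an integer \(k\), and \(k/n\) lies strictly between \(a\) and \(b\). A Python sketch (using exact `Fraction` inputs to sidestep floating-point issues; the function name is mine):

```python
from fractions import Fraction
from math import floor

def rational_between(a, b):
    # assumes a < b; choose n with 1/n < b - a, so (a*n, b*n) has
    # length > 1 and must contain an integer k; then a < k/n < b
    a, b = Fraction(a), Fraction(b)
    n = floor(1 / (b - a)) + 1
    k = floor(a * n) + 1
    return Fraction(k, n)

a, b = Fraction(12345, 100000), Fraction(12346, 100000)
q = rational_between(a, b)
assert a < q < b
```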

With that said and done, let me reiterate our definition of a manifold. A topological manifold of dimension \(n\) (also called an \(n\)-manifold) is a topological space \(X\) such that:

- \(X\) is locally Euclidean
- \(X\) is Hausdorff
- \(X\) is second-countable

Cool stuff — what are we gonna do with it? Well nothing yet. I've gone over a bunch of topics in this blog post, so I'm going to rest up a bit and wait until the next post in this series to talk about things like differential forms, bundles, Lie Groups, and everything else you've been hoping for years that someone would blog about. Thanks for reading! 😁