Advanced Encryption Standard (FIPS 197)

Introduction to Ciphers

In today's world, there is a monumental effort to secure data that we wish to keep private. Information like our social security numbers, credit card data, and passwords should all be hidden from the general public (and for good reason). Despite this, we seldom ask ourselves how this data is actually hidden. If the title hasn't already given it away, the answer is encryption.

If you're in the vast majority of the population, you probably don't encounter block ciphers in your day-to-day life. This shouldn't be an issue though, as this article is meant to be understood by anyone with a bit of coding experience.

The first week that I started writing ciphers for NIST, my mentor came up to me and gave me a helpful analogy: a block cipher is like an old-fashioned safe. The cipher itself is the safe, while our data is the valuables inside. The catch is that our key, while still called a 'key', is really just a big number of our choosing used for calculations.

A visual depiction of how a cipher is similar to a safe

Why AES?

Great question. It would take greater minds than mine to answer that question fully.

The simple answer is that AES has remained very resilient under years of scrutiny without sacrificing too much speed. This does not mean AES is necessarily efficient as a lightweight cipher (see PRINCE); nonetheless, would you rather keep your money in a national bank or in a safe under your bed?

For those interested in the security aspect, my previous analogy begins to break down when it comes to the motive of a hacker / thief. Just as a thief is interested in the money inside a safe, one would expect that the motive of a hacker is to steal passwords / sensitive information; however, it is truly the motive of a hacker to obtain our key (I'm interchanging hacker and cryptanalyst a bit loosely — in reality I think most hackers would be perfectly happy with a few hundred passwords). Therefore, the objective of a cipher is to make it as hard as possible to retrieve the key.

To give a little more detail, all computer information is stored in binary — thus, our key is just a fixed number of bits (128, 192, or 256 for AES). For every bit that we add, the number of possible keys (the key space) doubles. In order to break a cipher, one must find some algorithm that discovers the key more efficiently than a brute-force search (though I have not gone into the details of AES yet, this is the rationale behind adding extra rounds for larger key sizes).

To my knowledge, there are currently no practical non-side-channel attacks on AES; the best known is a biclique attack that only offers a theoretical improvement from \(2^{128} \) to roughly \( 2^{126} \) ('theoretical' meaning this attack is still not practical for modern computers). Though there are other ciphers that are nearly as strong as AES (e.g. Serpent and Twofish), the attention given to AES over the past few decades has allowed for speedups and security updates to patch the commonly-known side-channel attacks.

The Cipher

Now that you have all the background, it's time to discuss the actual algorithm of AES.

AES is what cryptographers call a Substitution-Permutation Network (SPN): every round / iteration of the algorithm takes our chunk of data, breaks it into smaller chunks that are fed through a substitution box, and then swaps the bits around according to some sort of permutation.

In order to truly understand what is going on in AES, the reader would first have to become familiar with the basics of polynomial rings and Galois fields (also referred to as finite fields) — I will leave a section at the end to cover these things for those who are interested.

As I briefly mentioned in an aside earlier, the AES algorithm is somewhat dependent on key-length (which is either 128 bits, 192 bits, or 256 bits) — however, the size of the data we wish to encrypt is fixed at 128 bits (16 bytes). The reason I say 'somewhat' is because AES is essentially a collection of functions ( SubBytes(), ShiftRows(), MixColumns(), and AddRoundKey() ) that are repeated; each iteration of these functions is referred to as a round. Each of AES-128, AES-192, and AES-256 executes the same code — it's merely the number of rounds that differs for each variant. In particular, AES-128 executes 10 rounds, AES-192 executes 12 rounds, and AES-256 executes 14 rounds. Increasing the number of rounds pushes the cost of most cryptanalysis back up toward that of a brute-force search.

With that said, here's a breakdown of the functions that are executed in each round of the algorithm.

Key Schedule

At this point, you're likely wondering how the key actually fits into the cipher: is there some sort of lock hidden in the code where the key mysteriously fits in, maybe some hash function that checks authenticity? Nope.

As I briefly touched upon earlier, the key is basically a large number used for arithmetic operations in \(GF(2^8)\); but even that isn't entirely true, as the key itself is only used in the first round for key-whitening. For later operations, the key is actually used to generate what are called round keys in a key schedule. Many ciphers utilize the concept of a key schedule, as it greatly improves upon the security of a key. For example, if a cryptanalyst were to gain access to one of our round keys through extensive attacks, the strength of our key schedule would determine whether that is enough information to compute the original key (in the case of AES, it is not, but it allows the cryptanalyst to gain access to other round keys which collectively can accomplish the task).

The AES key schedule is broken down into two subroutines, SubWord() and RotWord(), whose output is added in \(GF(2^8)\) to a round constant. The gist of the key schedule is that the previous round key is added to the current round key so that, as we continue along the key schedule, we experience a similar avalanche effect to that of SHA. Implementing the subroutines is incredibly straightforward: RotWord() cyclically rotates the bytes of the current round key as if they were on a wheel, and SubWord() simply breaks the round key up into individual bytes and sends them through the SBox (which is explained in the next section). Here is the code for each:
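Here's a minimal C sketch of the two subroutines. I'm packing each word big-endian into a uint32_t (my own layout choice), and the S-box table is the one from FIPS 197, explained in the SubBytes() section below:

```c
#include <stdint.h>

/* The AES S-box from FIPS 197 (explained in the SubBytes() section below). */
static const uint8_t sbox[256] = {
  0x63,0x7c,0x77,0x7b,0xf2,0x6b,0x6f,0xc5,0x30,0x01,0x67,0x2b,0xfe,0xd7,0xab,0x76,
  0xca,0x82,0xc9,0x7d,0xfa,0x59,0x47,0xf0,0xad,0xd4,0xa2,0xaf,0x9c,0xa4,0x72,0xc0,
  0xb7,0xfd,0x93,0x26,0x36,0x3f,0xf7,0xcc,0x34,0xa5,0xe5,0xf1,0x71,0xd8,0x31,0x15,
  0x04,0xc7,0x23,0xc3,0x18,0x96,0x05,0x9a,0x07,0x12,0x80,0xe2,0xeb,0x27,0xb2,0x75,
  0x09,0x83,0x2c,0x1a,0x1b,0x6e,0x5a,0xa0,0x52,0x3b,0xd6,0xb3,0x29,0xe3,0x2f,0x84,
  0x53,0xd1,0x00,0xed,0x20,0xfc,0xb1,0x5b,0x6a,0xcb,0xbe,0x39,0x4a,0x4c,0x58,0xcf,
  0xd0,0xef,0xaa,0xfb,0x43,0x4d,0x33,0x85,0x45,0xf9,0x02,0x7f,0x50,0x3c,0x9f,0xa8,
  0x51,0xa3,0x40,0x8f,0x92,0x9d,0x38,0xf5,0xbc,0xb6,0xda,0x21,0x10,0xff,0xf3,0xd2,
  0xcd,0x0c,0x13,0xec,0x5f,0x97,0x44,0x17,0xc4,0xa7,0x7e,0x3d,0x64,0x5d,0x19,0x73,
  0x60,0x81,0x4f,0xdc,0x22,0x2a,0x90,0x88,0x46,0xee,0xb8,0x14,0xde,0x5e,0x0b,0xdb,
  0xe0,0x32,0x3a,0x0a,0x49,0x06,0x24,0x5c,0xc2,0xd3,0xac,0x62,0x91,0x95,0xe4,0x79,
  0xe7,0xc8,0x37,0x6d,0x8d,0xd5,0x4e,0xa9,0x6c,0x56,0xf4,0xea,0x65,0x7a,0xae,0x08,
  0xba,0x78,0x25,0x2e,0x1c,0xa6,0xb4,0xc6,0xe8,0xdd,0x74,0x1f,0x4b,0xbd,0x8b,0x8a,
  0x70,0x3e,0xb5,0x66,0x48,0x03,0xf6,0x0e,0x61,0x35,0x57,0xb9,0x86,0xc1,0x1d,0x9e,
  0xe1,0xf8,0x98,0x11,0x69,0xd9,0x8e,0x94,0x9b,0x1e,0x87,0xe9,0xce,0x55,0x28,0xdf,
  0x8c,0xa1,0x89,0x0d,0xbf,0xe6,0x42,0x68,0x41,0x99,0x2d,0x0f,0xb0,0x54,0xbb,0x16
};

/* Cyclically rotate the four bytes of a word one position to the left. */
uint32_t RotWord(uint32_t w)
{
    return (w << 8) | (w >> 24);
}

/* Send each byte of the word through the S-box. */
uint32_t SubWord(uint32_t w)
{
    return ((uint32_t)sbox[(w >> 24) & 0xff] << 24) |
           ((uint32_t)sbox[(w >> 16) & 0xff] << 16) |
           ((uint32_t)sbox[(w >>  8) & 0xff] <<  8) |
            (uint32_t)sbox[ w        & 0xff];
}
```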

Since the size of data that we wish to encrypt is fixed at 16 bytes (hence the term "block" cipher — we only encrypt one 16-byte block at a time), we choose to represent this data as a 4x4 matrix of bytes which we will refer to from now on as the state: $$\begin{pmatrix} p_0 & p_4 & p_8 & p_{12} \\\\ p_1 & p_5 & p_9 & p_{13} \\\\ p_2 & p_6 & p_{10} & p_{14} \\\\ p_3 & p_7 & p_{11} & p_{15} \end{pmatrix}$$ I'm using the variable \(p_i \) for \( 0 \leq i \leq 15 \) here to denote the splitting of the plaintext into bytes. Our goal is to add 4 round keys to the state each round (since a round key is only a word and the state is 4 words). In order to do this, we tend to think of the state as an array of 4 columns, and perform such operations column by column.

Now the first four round keys are simply the key itself; however, each subsequent round key is formed by adding the round key four positions back to the previous round key (where, once every four words, the previous round key is first passed through the RotWord() and SubWord() subroutines and added to a round constant). Since our first four round keys are used for key-whitening, we actually have one additional round key which is used for the last round. Thus, for AES-128, there need to be 4 round keys for each of 10 + 1 rounds; for AES-192, 4 keys for each of 12 + 1 rounds; and for AES-256, 4 keys for each of 14 + 1 rounds. I am confident you can do the math.

Here's an example of the full key schedule:
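As a concrete sketch, here is the full AES-128 key expansion in C (the function name, the big-endian uint32_t word layout, and the Rcon encoding are my own choices; FIPS 197 Appendix A.1 walks through the same expansion by hand):

```c
#include <stdint.h>

static const uint8_t sbox[256] = {
  0x63,0x7c,0x77,0x7b,0xf2,0x6b,0x6f,0xc5,0x30,0x01,0x67,0x2b,0xfe,0xd7,0xab,0x76,
  0xca,0x82,0xc9,0x7d,0xfa,0x59,0x47,0xf0,0xad,0xd4,0xa2,0xaf,0x9c,0xa4,0x72,0xc0,
  0xb7,0xfd,0x93,0x26,0x36,0x3f,0xf7,0xcc,0x34,0xa5,0xe5,0xf1,0x71,0xd8,0x31,0x15,
  0x04,0xc7,0x23,0xc3,0x18,0x96,0x05,0x9a,0x07,0x12,0x80,0xe2,0xeb,0x27,0xb2,0x75,
  0x09,0x83,0x2c,0x1a,0x1b,0x6e,0x5a,0xa0,0x52,0x3b,0xd6,0xb3,0x29,0xe3,0x2f,0x84,
  0x53,0xd1,0x00,0xed,0x20,0xfc,0xb1,0x5b,0x6a,0xcb,0xbe,0x39,0x4a,0x4c,0x58,0xcf,
  0xd0,0xef,0xaa,0xfb,0x43,0x4d,0x33,0x85,0x45,0xf9,0x02,0x7f,0x50,0x3c,0x9f,0xa8,
  0x51,0xa3,0x40,0x8f,0x92,0x9d,0x38,0xf5,0xbc,0xb6,0xda,0x21,0x10,0xff,0xf3,0xd2,
  0xcd,0x0c,0x13,0xec,0x5f,0x97,0x44,0x17,0xc4,0xa7,0x7e,0x3d,0x64,0x5d,0x19,0x73,
  0x60,0x81,0x4f,0xdc,0x22,0x2a,0x90,0x88,0x46,0xee,0xb8,0x14,0xde,0x5e,0x0b,0xdb,
  0xe0,0x32,0x3a,0x0a,0x49,0x06,0x24,0x5c,0xc2,0xd3,0xac,0x62,0x91,0x95,0xe4,0x79,
  0xe7,0xc8,0x37,0x6d,0x8d,0xd5,0x4e,0xa9,0x6c,0x56,0xf4,0xea,0x65,0x7a,0xae,0x08,
  0xba,0x78,0x25,0x2e,0x1c,0xa6,0xb4,0xc6,0xe8,0xdd,0x74,0x1f,0x4b,0xbd,0x8b,0x8a,
  0x70,0x3e,0xb5,0x66,0x48,0x03,0xf6,0x0e,0x61,0x35,0x57,0xb9,0x86,0xc1,0x1d,0x9e,
  0xe1,0xf8,0x98,0x11,0x69,0xd9,0x8e,0x94,0x9b,0x1e,0x87,0xe9,0xce,0x55,0x28,0xdf,
  0x8c,0xa1,0x89,0x0d,0xbf,0xe6,0x42,0x68,0x41,0x99,0x2d,0x0f,0xb0,0x54,0xbb,0x16
};

static uint32_t RotWord(uint32_t w) { return (w << 8) | (w >> 24); }

static uint32_t SubWord(uint32_t w)
{
    return ((uint32_t)sbox[(w >> 24) & 0xff] << 24) |
           ((uint32_t)sbox[(w >> 16) & 0xff] << 16) |
           ((uint32_t)sbox[(w >>  8) & 0xff] <<  8) |
            (uint32_t)sbox[ w        & 0xff];
}

/* Expand a 16-byte key into the 44 words used by AES-128
 * (4 words per round key: 10 rounds + 1 extra for key-whitening). */
void KeyExpansion128(const uint8_t key[16], uint32_t w[44])
{
    /* Round constant words: first byte is 2^(i-1) computed in GF(2^8). */
    static const uint32_t Rcon[11] = { 0, 0x01000000, 0x02000000, 0x04000000,
        0x08000000, 0x10000000, 0x20000000, 0x40000000, 0x80000000,
        0x1b000000, 0x36000000 };

    for (int i = 0; i < 4; i++)      /* first four round keys: the key itself */
        w[i] = ((uint32_t)key[4*i] << 24) | ((uint32_t)key[4*i+1] << 16) |
               ((uint32_t)key[4*i+2] << 8) | key[4*i+3];

    for (int i = 4; i < 44; i++) {
        uint32_t temp = w[i-1];
        if (i % 4 == 0)              /* once per round: rotate, substitute, add Rcon */
            temp = SubWord(RotWord(temp)) ^ Rcon[i/4];
        w[i] = w[i-4] ^ temp;        /* add the word four positions back */
    }
}
```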

where the round constant words, denoted Rcon, have first byte \(2^{i-1}\) computed in \(GF(2^8)\) (and three zero bytes). I'll provide the constants in the array below:
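Rather than hard-code the ten constants, a short sketch can generate them by repeated doubling in \(GF(2^8)\) (the array name is my own):

```c
#include <stdint.h>

static uint8_t rcon[11];   /* rcon[i] = 2^(i-1) in GF(2^8), for i = 1..10 */

void make_rcon(void)
{
    uint8_t c = 1;
    for (int i = 1; i <= 10; i++) {
        rcon[i] = c;
        /* multiply by x; if the result overflows 8 bits,
         * reduce by x^8 + x^4 + x^3 + x + 1 (0x11b) */
        c = (uint8_t)((c << 1) ^ ((c >> 7) * 0x1b));
    }
}
```

This produces the familiar sequence 01, 02, 04, 08, 10, 20, 40, 80, 1b, 36 — note how the wrap-around at 0x80 lands on 0x1b rather than 0x00.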


SubBytes()

The first item on our list of round functions is that pesky "substitution" part of our whole substitution-permutation network. The reason our substitution layer is so important is that it provides a non-linear layer in \(GF(2^8)\), so that cryptanalysts cannot reduce the cipher to a linear system of equations (which would allow the cipher to be easily broken using GPUs and a sufficient knowledge of finite field arithmetic).

Now let \( b \) be an arbitrary byte from our state above (i.e. \( b = p_j \) for some \( 0 \leq j \leq 15\) ) and let \( b_i \) denote its \( i^{th}\) bit for \( 0 \leq i \leq 7 \). The transformation which the substitution layer applies to \(b\) is $$ \tilde{b_i} = (b^{-1})_i \oplus (b^{-1})_{(i + 4)\,\mathrm{mod}\, 8} \oplus (b^{-1})_{(i + 5)\,\mathrm{mod}\, 8} \oplus (b^{-1})_{(i + 6)\,\mathrm{mod}\, 8} \oplus (b^{-1})_{(i + 7)\,\mathrm{mod}\, 8} \oplus c_i $$ where \(b^{-1}\) is the multiplicative inverse of \(b\) in \(GF(2^8)\) and \(c_i\) denotes the \(i^{th}\) bit of the fixed constant \( 01100011\). Note that the symbol \(\oplus \) here represents addition in \(GF(2^8)\), which just so happens to be the bitwise exclusive-or (XOR) operation (i.e. \(0 \oplus 0 = 1 \oplus 1 = 0 \) and \(1 \oplus 0 = 0 \oplus 1 = 1 \) ).

Now if we decided to apply that transformation bit by bit to a 128-bit state, we would be wasting a HUGE amount of computing power since inversion in \(GF(2^8)\) is incredibly taxing in terms of clock cycles; instead, the National Institute of Standards and Technology (NIST) was gracious enough to provide a pre-computed lookup-table:
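Here is that table, transcribed from FIPS 197 as a C array (indexing is sbox[b], where b is the input byte):

```c
#include <stdint.h>

/* The AES S-box: sbox[b] is the substituted value of byte b. */
static const uint8_t sbox[256] = {
  0x63,0x7c,0x77,0x7b,0xf2,0x6b,0x6f,0xc5,0x30,0x01,0x67,0x2b,0xfe,0xd7,0xab,0x76,
  0xca,0x82,0xc9,0x7d,0xfa,0x59,0x47,0xf0,0xad,0xd4,0xa2,0xaf,0x9c,0xa4,0x72,0xc0,
  0xb7,0xfd,0x93,0x26,0x36,0x3f,0xf7,0xcc,0x34,0xa5,0xe5,0xf1,0x71,0xd8,0x31,0x15,
  0x04,0xc7,0x23,0xc3,0x18,0x96,0x05,0x9a,0x07,0x12,0x80,0xe2,0xeb,0x27,0xb2,0x75,
  0x09,0x83,0x2c,0x1a,0x1b,0x6e,0x5a,0xa0,0x52,0x3b,0xd6,0xb3,0x29,0xe3,0x2f,0x84,
  0x53,0xd1,0x00,0xed,0x20,0xfc,0xb1,0x5b,0x6a,0xcb,0xbe,0x39,0x4a,0x4c,0x58,0xcf,
  0xd0,0xef,0xaa,0xfb,0x43,0x4d,0x33,0x85,0x45,0xf9,0x02,0x7f,0x50,0x3c,0x9f,0xa8,
  0x51,0xa3,0x40,0x8f,0x92,0x9d,0x38,0xf5,0xbc,0xb6,0xda,0x21,0x10,0xff,0xf3,0xd2,
  0xcd,0x0c,0x13,0xec,0x5f,0x97,0x44,0x17,0xc4,0xa7,0x7e,0x3d,0x64,0x5d,0x19,0x73,
  0x60,0x81,0x4f,0xdc,0x22,0x2a,0x90,0x88,0x46,0xee,0xb8,0x14,0xde,0x5e,0x0b,0xdb,
  0xe0,0x32,0x3a,0x0a,0x49,0x06,0x24,0x5c,0xc2,0xd3,0xac,0x62,0x91,0x95,0xe4,0x79,
  0xe7,0xc8,0x37,0x6d,0x8d,0xd5,0x4e,0xa9,0x6c,0x56,0xf4,0xea,0x65,0x7a,0xae,0x08,
  0xba,0x78,0x25,0x2e,0x1c,0xa6,0xb4,0xc6,0xe8,0xdd,0x74,0x1f,0x4b,0xbd,0x8b,0x8a,
  0x70,0x3e,0xb5,0x66,0x48,0x03,0xf6,0x0e,0x61,0x35,0x57,0xb9,0x86,0xc1,0x1d,0x9e,
  0xe1,0xf8,0x98,0x11,0x69,0xd9,0x8e,0x94,0x9b,0x1e,0x87,0xe9,0xce,0x55,0x28,0xdf,
  0x8c,0xa1,0x89,0x0d,0xbf,0xe6,0x42,0x68,0x41,0x99,0x2d,0x0f,0xb0,0x54,0xbb,0x16
};
```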

Now that we have our lookup table, creating a function to send the state through the substitution layer is pretty simple:
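In practice you'd index the precomputed table directly; to keep this snippet self-contained (and to show where the table comes from), the sketch below recomputes each entry from the inversion-plus-affine transformation described above. The helper names are my own:

```c
#include <stdint.h>

static uint8_t gmult(uint8_t a, uint8_t b)    /* multiplication in GF(2^8) */
{
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        a = (uint8_t)((a << 1) ^ ((a >> 7) * 0x1b)); /* reduce by x^8+x^4+x^3+x+1 */
        b >>= 1;
    }
    return p;
}

static uint8_t sbox_entry(uint8_t b)
{
    uint8_t inv = 0, r = 0;
    for (int y = 1; b && y < 256; y++)        /* brute-force b^-1 (0 maps to 0) */
        if (gmult(b, (uint8_t)y) == 1) { inv = (uint8_t)y; break; }
    for (int i = 0; i < 8; i++) {             /* the affine transformation */
        int bit = ((inv >> i) ^ (inv >> ((i+4)%8)) ^ (inv >> ((i+5)%8)) ^
                   (inv >> ((i+6)%8)) ^ (inv >> ((i+7)%8))) & 1;
        r |= (uint8_t)(bit << i);
    }
    return r ^ 0x63;                          /* add the constant 01100011 */
}

/* Replace every byte of the 16-byte state with its S-box image. */
void SubBytes(uint8_t state[16])
{
    for (int i = 0; i < 16; i++)
        state[i] = sbox_entry(state[i]);
}
```

Computing entries on the fly like this is far too slow for real use (it's exactly the clock-cycle problem mentioned above), but it's a nice sanity check that the table and the formula agree.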


ShiftRows()

Next on our agenda is the ShiftRows() method, which does just what the name suggests. Indexing rows from zero (i.e. for \(0 \leq i \leq 3\)), we cyclically shift row \(i\) of the state to the left by \(i\) bytes: $$ \begin{pmatrix} p_0 & p_4 & p_8 & p_{12} \\\\ p_1 & p_5 & p_9 & p_{13} \\\\ p_2 & p_6 & p_{10} & p_{14} \\\\ p_3 & p_7 & p_{11} & p_{15} \end{pmatrix} \longmapsto_{ShiftRows()} \begin{pmatrix} p_0 & p_4 & p_8 & p_{12} \\\\ p_5 & p_9 & p_{13} & p_1 \\\\ p_{10} & p_{14} & p_2 & p_6 \\\\ p_{15} & p_{3} & p_{7} & p_{11} \end{pmatrix} $$
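A minimal sketch, storing the state column by column so byte \(p_i\) lives at index \(i\) (i.e. row r of column c is state[r + 4*c]):

```c
#include <stdint.h>
#include <string.h>

/* Cyclically rotate row r of the state left by r bytes. */
void ShiftRows(uint8_t state[16])
{
    uint8_t tmp[16];
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            tmp[r + 4*c] = state[r + 4*((c + r) % 4)];
    memcpy(state, tmp, 16);
}
```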


MixColumns()

Third on our list is the MixColumns() function — this is where things start to get trickier. The reason things get tricky is because this function is a linear transform over \(GF(2^8)\), and is thus heavily dependent on polynomial multiplication. I briefly mentioned in the SubBytes() section that inversion (i.e. division) in a Galois field is computationally taxing in terms of clock cycles — the same applies to multiplication, though a few of the steps drop out. The exact rationale for MixColumns() is explained in the additional content for decryption. Overlooking the details, we have the transform: $$ \begin{pmatrix} 02 & 03 & 01 & 01 \\\\ 01 & 02 & 03 & 01 \\\\ 01 & 01 & 02 & 03 \\\\ 03 & 01 & 01 & 02 \end{pmatrix} \begin{pmatrix} p_0 & p_4 & p_8 & p_{12} \\\\ p_1 & p_5 & p_9 & p_{13} \\\\ p_2 & p_6 & p_{10} & p_{14} \\\\ p_3 & p_7 & p_{11} & p_{15} \end{pmatrix} $$

We have two options here: since multiplication isn't as hard as division in \(GF(2^8)\), we could find an algorithm for how to multiply by small numbers. Alternatively, we could just do what we did before with the substitution layer and precompute everything into a lookup-table. Since each has its own advantage (a space vs. time tradeoff), I'll go ahead and provide both.

For the explicit multiplication approach, recall that a 'carry-over' digit was used in elementary school multiplication when one digit overflowed into another. The idea doesn't change just because we have switched fields from \(\mathbb{R}\) to \(GF(2^8)\), but we must be cautious to use the right binary operator. Ultimately, the gmult() function below relays this idea using the proper XOR:
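A sketch of the shift-and-XOR ("Russian peasant") approach: each set bit of the second operand contributes a doubled copy of the first, and any overflow past 8 bits is reduced by the AES modulus \(x^8 + x^4 + x^3 + x + 1\) (0x11b):

```c
#include <stdint.h>

/* Multiply two bytes in GF(2^8), reducing by x^8 + x^4 + x^3 + x + 1. */
uint8_t gmult(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1)
            p ^= a;                    /* "add" (XOR) this shifted copy of a   */
        a = (uint8_t)((a << 1) ^       /* double a, i.e. multiply by x ...     */
                      ((a >> 7) * 0x1b)); /* ... reducing if a bit carried out */
        b >>= 1;
    }
    return p;
}
```

The values {57}·{13} = {fe} and {57}·{83} = {c1} worked out in FIPS 197 section 4.2 make handy sanity checks.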

Alternatively, we could simply precompute each number that we anticipate multiplying by and store the results in a lookup-table. As I mentioned with the space-time tradeoff, this clearly takes up more space (256 bytes for each table) but does reduce clock cycles.

Multiplication tables for 0x02 and 0x03:
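Rather than type out 512 hex values, here's a sketch (table names are mine) that fills both tables at startup; multiplication by 0x03 is just multiplication by 0x02 plus one more XOR of the original value:

```c
#include <stdint.h>

static uint8_t mul2[256];  /* x * 0x02 in GF(2^8) (the classic "xtime" table) */
static uint8_t mul3[256];  /* x * 0x03 = (x * 0x02) ^ x                       */

void init_mul_tables(void)
{
    for (int x = 0; x < 256; x++) {
        /* double, reducing by x^8 + x^4 + x^3 + x + 1 on overflow */
        mul2[x] = (uint8_t)((x << 1) ^ ((x & 0x80) ? 0x1b : 0x00));
        mul3[x] = (uint8_t)(mul2[x] ^ x);
    }
}
```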

With Galois field multiplication for small numbers in our arsenal, the implementation of MixColumns() is now fairly straightforward. What we have is:
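A sketch using the standard xtime identity (02·a ⊕ 03·b ⊕ c ⊕ d can be rewritten as a ⊕ (a⊕b⊕c⊕d) ⊕ 02·(a⊕b), which saves multiplications; the restructuring is a common trick, not something mandated by the standard):

```c
#include <stdint.h>

static uint8_t xtime(uint8_t a)             /* multiply by 0x02 in GF(2^8) */
{
    return (uint8_t)((a << 1) ^ ((a >> 7) * 0x1b));
}

/* Multiply each column of the column-major state by the fixed matrix
 * (02 03 01 01 / 01 02 03 01 / 01 01 02 03 / 03 01 01 02). */
void MixColumns(uint8_t state[16])
{
    for (int c = 0; c < 4; c++) {
        uint8_t *col = state + 4*c;
        uint8_t a0 = col[0], a1 = col[1], a2 = col[2], a3 = col[3];
        uint8_t all = (uint8_t)(a0 ^ a1 ^ a2 ^ a3);
        /* e.g. new col[0] = 02*a0 ^ 03*a1 ^ a2 ^ a3
         *                 = a0 ^ all ^ xtime(a0 ^ a1)           */
        col[0] ^= all ^ xtime((uint8_t)(a0 ^ a1));
        col[1] ^= all ^ xtime((uint8_t)(a1 ^ a2));
        col[2] ^= all ^ xtime((uint8_t)(a2 ^ a3));
        col[3] ^= all ^ xtime((uint8_t)(a3 ^ a0));
    }
}
```

The column db 13 53 45 mapping to 8e 4d a1 bc is a widely quoted test vector, and any column of identical bytes maps to itself (since 02 ⊕ 03 ⊕ 01 ⊕ 01 = 01).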


AddRoundKey()

Again, it's just what it sounds like. If you didn't feel like reading over the Key Schedule section but want to actually understand why this routine is vital to the cipher, then I can't be of much help — the key schedule makes the round keys which are used by AddRoundKey().

Alright, so hopefully you understand that there are 4 times as many round keys as there are rounds (plus the 4 used for key-whitening). If this is still unclear to you, we reached this number because the state is 4 times the size of a round key (or conversely, each round key is 1/4 the size of the state). In reality, if we were to mesh the round keys together four at a time, we would have a full 128-bit round key for each round. If you recall from the SubBytes() section, addition in \(GF(2^8)\) is the same as bitwise exclusive-or (XOR), so we wind up with the following code:
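Treating the four meshed round keys as a single 16-byte block, the whole routine is one loop (and, since XOR is its own inverse, applying it twice is a no-op):

```c
#include <stdint.h>

/* Addition in GF(2^8) is XOR, so adding a 128-bit round key is one pass. */
void AddRoundKey(uint8_t state[16], const uint8_t roundkey[16])
{
    for (int i = 0; i < 16; i++)
        state[i] ^= roundkey[i];
}
```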

Encryption Code

Now that we have the main components of AES down, we're ready to put it all together in a cogent, continuous block. For all variants of AES, we first perform what is called key-whitening on the state: the AddRoundKey() function applies our key to the initial data. Following that, we perform exactly one less than the total number of rounds in the following format:
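Here is the whole of AES-128 encryption assembled as a sketch (function names, the column-major byte layout, and the byte-array key schedule are my own choices; the structure follows the round description above, and FIPS 197 Appendix B provides a full worked test vector):

```c
#include <stdint.h>
#include <string.h>

static const uint8_t sbox[256] = {
  0x63,0x7c,0x77,0x7b,0xf2,0x6b,0x6f,0xc5,0x30,0x01,0x67,0x2b,0xfe,0xd7,0xab,0x76,
  0xca,0x82,0xc9,0x7d,0xfa,0x59,0x47,0xf0,0xad,0xd4,0xa2,0xaf,0x9c,0xa4,0x72,0xc0,
  0xb7,0xfd,0x93,0x26,0x36,0x3f,0xf7,0xcc,0x34,0xa5,0xe5,0xf1,0x71,0xd8,0x31,0x15,
  0x04,0xc7,0x23,0xc3,0x18,0x96,0x05,0x9a,0x07,0x12,0x80,0xe2,0xeb,0x27,0xb2,0x75,
  0x09,0x83,0x2c,0x1a,0x1b,0x6e,0x5a,0xa0,0x52,0x3b,0xd6,0xb3,0x29,0xe3,0x2f,0x84,
  0x53,0xd1,0x00,0xed,0x20,0xfc,0xb1,0x5b,0x6a,0xcb,0xbe,0x39,0x4a,0x4c,0x58,0xcf,
  0xd0,0xef,0xaa,0xfb,0x43,0x4d,0x33,0x85,0x45,0xf9,0x02,0x7f,0x50,0x3c,0x9f,0xa8,
  0x51,0xa3,0x40,0x8f,0x92,0x9d,0x38,0xf5,0xbc,0xb6,0xda,0x21,0x10,0xff,0xf3,0xd2,
  0xcd,0x0c,0x13,0xec,0x5f,0x97,0x44,0x17,0xc4,0xa7,0x7e,0x3d,0x64,0x5d,0x19,0x73,
  0x60,0x81,0x4f,0xdc,0x22,0x2a,0x90,0x88,0x46,0xee,0xb8,0x14,0xde,0x5e,0x0b,0xdb,
  0xe0,0x32,0x3a,0x0a,0x49,0x06,0x24,0x5c,0xc2,0xd3,0xac,0x62,0x91,0x95,0xe4,0x79,
  0xe7,0xc8,0x37,0x6d,0x8d,0xd5,0x4e,0xa9,0x6c,0x56,0xf4,0xea,0x65,0x7a,0xae,0x08,
  0xba,0x78,0x25,0x2e,0x1c,0xa6,0xb4,0xc6,0xe8,0xdd,0x74,0x1f,0x4b,0xbd,0x8b,0x8a,
  0x70,0x3e,0xb5,0x66,0x48,0x03,0xf6,0x0e,0x61,0x35,0x57,0xb9,0x86,0xc1,0x1d,0x9e,
  0xe1,0xf8,0x98,0x11,0x69,0xd9,0x8e,0x94,0x9b,0x1e,0x87,0xe9,0xce,0x55,0x28,0xdf,
  0x8c,0xa1,0x89,0x0d,0xbf,0xe6,0x42,0x68,0x41,0x99,0x2d,0x0f,0xb0,0x54,0xbb,0x16
};

static uint8_t xtime(uint8_t a) { return (uint8_t)((a << 1) ^ ((a >> 7) * 0x1b)); }

/* Expand the key into 11 round keys of 16 bytes each (rk is 176 bytes). */
static void key_expand(const uint8_t key[16], uint8_t rk[176])
{
    static const uint8_t rcon[11] = {0x00,0x01,0x02,0x04,0x08,0x10,0x20,0x40,0x80,0x1b,0x36};
    memcpy(rk, key, 16);
    for (int i = 4; i < 44; i++) {
        uint8_t t0 = rk[4*i-4], t1 = rk[4*i-3], t2 = rk[4*i-2], t3 = rk[4*i-1];
        if (i % 4 == 0) {                       /* RotWord + SubWord + Rcon */
            uint8_t tmp = t0;
            t0 = sbox[t1] ^ rcon[i/4];
            t1 = sbox[t2]; t2 = sbox[t3]; t3 = sbox[tmp];
        }
        rk[4*i]   = rk[4*(i-4)]   ^ t0;  rk[4*i+1] = rk[4*(i-4)+1] ^ t1;
        rk[4*i+2] = rk[4*(i-4)+2] ^ t2;  rk[4*i+3] = rk[4*(i-4)+3] ^ t3;
    }
}

static void add_round_key(uint8_t s[16], const uint8_t *rk)
{
    for (int i = 0; i < 16; i++) s[i] ^= rk[i];
}

static void sub_bytes(uint8_t s[16])
{
    for (int i = 0; i < 16; i++) s[i] = sbox[s[i]];
}

static void shift_rows(uint8_t s[16])           /* rotate row r left by r */
{
    uint8_t t[16];
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            t[r + 4*c] = s[r + 4*((c + r) % 4)];
    memcpy(s, t, 16);
}

static void mix_columns(uint8_t s[16])
{
    for (int c = 0; c < 4; c++) {
        uint8_t *p = s + 4*c;
        uint8_t a0 = p[0], a1 = p[1], a2 = p[2], a3 = p[3];
        uint8_t all = (uint8_t)(a0 ^ a1 ^ a2 ^ a3);
        p[0] ^= all ^ xtime((uint8_t)(a0 ^ a1));
        p[1] ^= all ^ xtime((uint8_t)(a1 ^ a2));
        p[2] ^= all ^ xtime((uint8_t)(a2 ^ a3));
        p[3] ^= all ^ xtime((uint8_t)(a3 ^ a0));
    }
}

void aes128_encrypt_block(const uint8_t in[16], const uint8_t key[16], uint8_t out[16])
{
    uint8_t rk[176], s[16];
    key_expand(key, rk);
    memcpy(s, in, 16);
    add_round_key(s, rk);                       /* key-whitening */
    for (int round = 1; round < 10; round++) {  /* 9 full rounds */
        sub_bytes(s); shift_rows(s); mix_columns(s);
        add_round_key(s, rk + 16*round);
    }
    sub_bytes(s); shift_rows(s);                /* final round drops MixColumns */
    add_round_key(s, rk + 160);
    memcpy(out, s, 16);
}
```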


Lastly, we perform one final round where we merely take out the MixColumns() function — and that's it! That's the AES block cipher. I hope you enjoyed reading this almost as much as I didn't enjoy writing the HTML. I will provide the additional content on Galois fields below and a much briefer section on decryption. However, if your goal was to understand the current most popular cipher, this is effectively the end. Thanks for reading! 😁

Additional Content: \(GF(2^8)\)

Oh, it's you again.

Alright, you've made it this far so there's only a little bit left to understand: what is this \(GF(2^8)\) symbol that we keep on seeing? Well, for starters, that is the Galois Field of order 256 — the number of values an unsigned byte (8 bits, hence the 8) can represent. But what is a field? For anyone pursuing pure mathematics, you can jump to the next paragraph. For everyone else, what I'm about to go over may be a bit of an abstract concept.

Imagine you have some set of objects and you want to know if you can perform standard mathematics on those objects in a similar fashion to the way most people use addition and subtraction on numbers. To find this out, you need to be a little more specific: what kind of mathematics do you want to do?

  • Addition and subtraction? You want a Group
  • Multiplication? A Ring
  • Multiplication AND Division? You'll need a Field (THIS IS US)
  • Integration? Measure Space
  • Differentiation? A ( \( C^1\) at minimum, but ideally smooth) Manifold
  • Fourier Analysis? A Hilbert Space

Okay back on topic — a Galois Field, which is also known as a finite field, is basically what the second name suggests. It is a finite set of objects that supports addition, subtraction, multiplication, and division. According to an important theorem, any Galois field must be of order \(p^k\) for some prime \(p\) and some positive integer \(k\). Thus, the elements of our Galois field are essentially just numbers represented by some prime base. That said, all is well for our Rijndael field (the technical name for \(GF(2^8)\) ) since 2 is prime and 8 is positive.

I'm not actually going to explain Galois fields in the context of the Rijndael field \(GF(2^8) \) in this section, but abstractly for some prime \(p\) and positive integer \(k\).

Suppose \(a \in GF(p^k) \). Then we can represent the element \(a\) base \(p\) as follows: $$ a = a_{k-1}p^{k-1} \oplus a_{k-2}p^{k-2} \oplus \dots \oplus a_{1}p \oplus a_0$$ with coefficients \(a_i \in \{0, 1, \dots, p-1 \} \) for \(0 \leq i \leq k-1 \) (where \( \oplus \) is the binary operation replacing addition in this field). Now let's take a second to see what's going on here: think about binary, specifically a byte in our Rijndael field. Well, our byte is some collection of 1's and 0's... but that makes sense, as our prime is 2 so our coefficients can only be \(a_i \in \{ 0, 1\}\)! What about ternary numbers in the Galois field \( GF(3^4) \)? In that scenario, our coefficients can only be 0, 1, or 2. Assuming our binary operation is just normal addition, how would we translate the element 2102 in \( GF(3^4) \)? Well, going back to the rationale above, this is really $$ 2\cdot 3^{4-1} + 1 \cdot 3^{4-2} + 0 \cdot 3^{4-3} + 2\cdot3^{4-4} = 65 $$ And there you have it, 2102 is the representation of 65 in \( GF(3^4) \).

But why is this any better? Why can't we just represent 65 as 65 (in decimal) and be content with it? Well there are two main points as to why not:

  1. Computers have historically not represented numbers that way. A bit is really an electrical charge in a capacitor, which is always in one of two states (i.e. 0 for low charge, 1 for high charge).

  2. The second (and better) answer is what I'm about to explain in the next paragraph: binary operations in the Galois Field are not the same as they are over the integers. By binary operations, I'm referring to multiplication and addition (but in their modified format).

I'll begin with the easier of the two binary operations: addition in \(GF(p^k)\). When we add two coefficients, say \(a_i\) and \(b_i\), we're actually adding them in the coefficient group of order \(p\). As a corollary of Lagrange's Theorem, a group of prime order must be cyclic (a straightforward proof). Thus, addition of coefficients is just modular arithmetic. For example, consider \(GF(5^3) \) with the two numbers \( 413 \) and \( 232 \):

\[ \begin{array}{r} 413 \\ +232 \\ \hline 140 \end{array} \]

At this point you may be telling yourself, "That's just objectively wrong: 3 + 2 is 5 not 0." Well, you'd be right if this were the integers. However, as I mentioned earlier, the addition is like modular arithmetic with respect to our base prime (which is 5). Thus, we are actually evaluating the expression (3 + 2) mod 5, which does happen to be equal to 0. The same rationale applies to each digit.

For the sake of completeness, I mentioned earlier that addition in \( GF(2^8)\) is equivalent to the exclusive-or (XOR) operation. This is pretty easy to show — recall that XOR works in the following way:

P Q | P ⊕ Q
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0

Well whenever our base prime is 2 (i.e. \(GF(2^k)\) ), we essentially have the same behavior since all coefficients are either 1 (true) or 0 (false) and \( 1 \oplus 1 = (1 + 1) \text{mod} 2 = 0 \).

An even more helpful fact (which we build upon in the decryption stage) is that addition and subtraction are the same exact operation in \( GF(2^k) \)! In any group-theoretical setting, subtraction is simply addition applied to the inverse element. For example, given some integer \(n\), subtraction by a second integer \(m\) is actually the same as addition by \(-m\) (which is \(m\)'s inverse with respect to addition). For any group, we have the trivial identity \( g \ominus g = g \oplus g^{-1} = e \) where \(e\) is the identity (which is 0 for the integers under addition). However, if you notice from the table above, every digit is its own inverse! Therefore, we also have

$$ g \ominus g = g \oplus g^{-1} = g \oplus g = e $$

In the more general setting of \(GF(p^k)\), consider the numbers 12 and 34 in \( GF(5^2) \) and suppose we want to find the answer to \( y = 12 \ominus 34 \). Then, equivalently, our answer must satisfy \( y \oplus 34 = 12 \). The catch here is that digits wrap back around when they become too large, so we can reverse-engineer the solution to be 33 (since \( 3 \oplus 3 = (3 + 3)\, \text{mod}\, 5 = 1 \) and \( 3 \oplus 4 = (3 + 4)\, \text{mod}\, 5 = 2\) ).

Now comes the hard part: multiplication. Conceptually it's not much different, in that we choose a number called our irreducible polynomial (the irreducible polynomial in AES is \( x^8 + x^4 + x^3 + x + 1 \)), which we use for the same purpose of allowing the result to wrap back around when it becomes large. The difficulty here lies in the fact that our coefficients collectively affect each coefficient of the other term. When we perform regular multiplication, we are only really working with two coefficients; on the other hand, when we multiply two polynomials, each term of the first polynomial affects each term of the second polynomial (what most middle-schoolers learn as FOIL). Galois field multiplication works the exact same as polynomial multiplication, and for that reason many people actually choose to represent information in polynomial format when it comes time for calculations.

For example, let's consider multiplication in \( GF(3^3) \) with irreducible polynomial \(1102 = x^3 + x^2 + 2 \). Take the numbers 201 and 11 — translating them into polynomials we get \( 2x^2 + 1 \) and \(x + 1\). By the distributive property (i.e. FOIL), we get $$ (2x^2 + 1)(x + 1) = 2x^3 + 2x^2 + x + 1 $$ However, this answer is too large for \(GF(3^3)\) so we must take the answer modulo \(x^3 + x^2 + 2\) (using polynomial division) to get $$ (2x^3 + 2x^2 + x + 1) \text{mod} (x^3 + x^2 + 2) = x$$ which is 010 in ternary, i.e. 3. Thus, we find that in \( GF(3^3) \) with irreducible polynomial \(x^3 + x^2 + 2\), the equation \( 19 \otimes 4 = 3 \) holds (I translated from ternary back to base 10). Neat stuff, huh?
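The worked example can be replayed in a few lines of C. This little helper is my own (it has nothing to do with AES itself); polynomials are stored as coefficient arrays indexed by power of \(x\), and the reduction uses \(x^3 \equiv 2x^2 + 1 \pmod{x^3 + x^2 + 2}\) over \(GF(3)\):

```c
#include <string.h>

/* Multiply two GF(3^3) elements given as coefficient arrays (index = power
 * of x, coefficients in {0,1,2}), reducing by x^3 + x^2 + 2. */
void gf27_mult(const int a[3], const int b[3], int out[3])
{
    int prod[5] = {0};
    for (int i = 0; i < 3; i++)           /* schoolbook polynomial product */
        for (int j = 0; j < 3; j++)
            prod[i + j] = (prod[i + j] + a[i] * b[j]) % 3;
    for (int d = 4; d >= 3; d--) {        /* reduce: x^3 = 2x^2 + 1 mod 3 */
        int c = prod[d];
        prod[d] = 0;
        prod[d - 1] = (prod[d - 1] + 2 * c) % 3;
        prod[d - 3] = (prod[d - 3] + c) % 3;
    }
    memcpy(out, prod, 3 * sizeof(int));
}
```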

As you can see, it is much more computationally intensive to do multiplication in \(GF(2^8)\) than it is to do regular multiplication over the integers \(\mathbb{Z}\). To see why the SBox from SubBytes() provides a level of non-linearity, recall that the SBox performs the calculation $$ \tilde{b_i} = (b^{-1})_i \oplus (b^{-1})_{(i + 4)\mathrm{mod} 8} \oplus (b^{-1})_{(i + 5)\mathrm{mod} 8} \oplus (b^{-1})_{(i + 6)\mathrm{mod} 8} \oplus (b^{-1})_{(i + 7)\mathrm{mod} 8} + c_i $$ in \(GF(2^8)\). After experiencing for yourself how much more complex multiplication is in a finite field, it shouldn't be hard to see that Galois Field arithmetic prevents cryptanalysts from representing the cipher as a system of even thousands of linear equations.

Additional Content: Decryption

It's likely become evident to you at this point, but for the reader who hasn't put too much thought into it: how do you plan to get back your data now that it has gone through a complex encryption? Sure, you have the key, but I never once told you that the cipher above is its own inverse (because it isn't). Building upon earlier analogies, I have given you a way to put your valuables in a safe, but have not told you anything about how to get them out of the safe. I chose to put this section off until the end due to the fact that many of the inverse operations in AES' decryption require knowledge of \( GF(2^8) \) from the previous section.


InvSubBytes()

Proceeding in the same order as encryption, the first routine that we wish to find an inverse for is SubBytes(). If you recall, the non-linearity of the SBox came from the transformation: $$ \tilde{b_i} = (b^{-1})_i \oplus (b^{-1})_{(i + 4)\,\mathrm{mod}\, 8} \oplus (b^{-1})_{(i + 5)\,\mathrm{mod}\, 8} \oplus (b^{-1})_{(i + 6)\,\mathrm{mod}\, 8} \oplus (b^{-1})_{(i + 7)\,\mathrm{mod}\, 8} \oplus c_i $$ which is computed by first taking the inverse in \( GF(2^8)\) and then applying the affine transformation $$ \tilde{b_i} = b_i \oplus b_{(i + 4)\,\mathrm{mod}\, 8} \oplus b_{(i + 5)\,\mathrm{mod}\, 8} \oplus b_{(i + 6)\,\mathrm{mod}\, 8} \oplus b_{(i + 7)\,\mathrm{mod}\, 8} \oplus c_i $$ To undo this, it's tempting to apply the same affine transformation and then take the inverse of the result; however, note that the \(b_{(i + j)\,\text{mod}\,8}\) above refer to the bits of our original byte before going through the substitution layer (i.e. the very byte we are trying to compute), so applying the two steps in the wrong order does not undo them. The correct inverse applies the inverse of the affine transformation first, and only then the inverse in \(GF(2^8)\).

Fortunately, we have a giant SBox which stores all the results of our SubBytes() mapping. Even more fortunate, our SBox has no collisions (i.e. it's injective) and maps to every possible byte value (i.e. it's surjective). For those who have a couple weeks of linear algebra under their belt, it should be clear that an inverse exists. To construct this inverse, just record, for each output byte, which input byte produced it. Luckily, someone already did the work for you:
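Instead of transcribing a second 256-entry table, a sketch can build the inverse table directly from the forward S-box (exactly the "record which input maps to each output" idea just described):

```c
#include <stdint.h>

static const uint8_t sbox[256] = {
  0x63,0x7c,0x77,0x7b,0xf2,0x6b,0x6f,0xc5,0x30,0x01,0x67,0x2b,0xfe,0xd7,0xab,0x76,
  0xca,0x82,0xc9,0x7d,0xfa,0x59,0x47,0xf0,0xad,0xd4,0xa2,0xaf,0x9c,0xa4,0x72,0xc0,
  0xb7,0xfd,0x93,0x26,0x36,0x3f,0xf7,0xcc,0x34,0xa5,0xe5,0xf1,0x71,0xd8,0x31,0x15,
  0x04,0xc7,0x23,0xc3,0x18,0x96,0x05,0x9a,0x07,0x12,0x80,0xe2,0xeb,0x27,0xb2,0x75,
  0x09,0x83,0x2c,0x1a,0x1b,0x6e,0x5a,0xa0,0x52,0x3b,0xd6,0xb3,0x29,0xe3,0x2f,0x84,
  0x53,0xd1,0x00,0xed,0x20,0xfc,0xb1,0x5b,0x6a,0xcb,0xbe,0x39,0x4a,0x4c,0x58,0xcf,
  0xd0,0xef,0xaa,0xfb,0x43,0x4d,0x33,0x85,0x45,0xf9,0x02,0x7f,0x50,0x3c,0x9f,0xa8,
  0x51,0xa3,0x40,0x8f,0x92,0x9d,0x38,0xf5,0xbc,0xb6,0xda,0x21,0x10,0xff,0xf3,0xd2,
  0xcd,0x0c,0x13,0xec,0x5f,0x97,0x44,0x17,0xc4,0xa7,0x7e,0x3d,0x64,0x5d,0x19,0x73,
  0x60,0x81,0x4f,0xdc,0x22,0x2a,0x90,0x88,0x46,0xee,0xb8,0x14,0xde,0x5e,0x0b,0xdb,
  0xe0,0x32,0x3a,0x0a,0x49,0x06,0x24,0x5c,0xc2,0xd3,0xac,0x62,0x91,0x95,0xe4,0x79,
  0xe7,0xc8,0x37,0x6d,0x8d,0xd5,0x4e,0xa9,0x6c,0x56,0xf4,0xea,0x65,0x7a,0xae,0x08,
  0xba,0x78,0x25,0x2e,0x1c,0xa6,0xb4,0xc6,0xe8,0xdd,0x74,0x1f,0x4b,0xbd,0x8b,0x8a,
  0x70,0x3e,0xb5,0x66,0x48,0x03,0xf6,0x0e,0x61,0x35,0x57,0xb9,0x86,0xc1,0x1d,0x9e,
  0xe1,0xf8,0x98,0x11,0x69,0xd9,0x8e,0x94,0x9b,0x1e,0x87,0xe9,0xce,0x55,0x28,0xdf,
  0x8c,0xa1,0x89,0x0d,0xbf,0xe6,0x42,0x68,0x41,0x99,0x2d,0x0f,0xb0,0x54,0xbb,0x16
};

static uint8_t inv_sbox[256];

/* The S-box is a bijection, so inverting it is just a matter of recording,
 * for each output byte, which input byte produced it. */
void build_inv_sbox(void)
{
    for (int i = 0; i < 256; i++)
        inv_sbox[sbox[i]] = (uint8_t)i;
}
```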


InvShiftRows()

If this one isn't obvious, recall that our original ShiftRows() simply shifted row \(i\) of the state cyclically to the left by \(i\) bytes. Well, just replace left with right and you're golden.
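A sketch, mirroring the column-major layout used for ShiftRows() earlier:

```c
#include <stdint.h>
#include <string.h>

/* Cyclically rotate row r of the state RIGHT by r bytes,
 * exactly undoing the left rotation performed by ShiftRows(). */
void InvShiftRows(uint8_t state[16])
{
    uint8_t tmp[16];
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            tmp[r + 4*((c + r) % 4)] = state[r + 4*c];
    memcpy(state, tmp, 16);
}
```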


InvMixColumns()

After all that we've gone over, we're finally at what I consider to be the make-or-break point of understanding the AES cipher. That is, you either get why this works or you haven't a damn clue and you're just copying and pasting code at this point (no shame in it).

Our original MixColumns() function was simply matrix multiplication, so it makes sense that the inverse function is simply the inverse of the matrix! Easy right? Not so much. The original matrix multiplication was in \(GF(2^8)\), so unfortunately I'm going to have to jump around a bit and leave the reader to some research on how the Extended Euclidean Algorithm fits in.

Before all that noise, it's time for a little more detail on our original MixColumns() since we have a basic knowledge of \(GF(2^8) \). Suppose we have two polynomials \( a(x) = a_3x^3 + a_2x^2 + a_1x + a_0 \) and \( b(x) = b_3x^3 + b_2x^2 + b_1x + b_0 \) and we want to multiply them. From high school algebra, we would simply get $$ a(x) \otimes b(x) = a_3b_3 x^6 + (a_2b_3 + a_3b_2)x^5 + (a_1b_3 + a_3b_1 + a_2b_2)x^4 \\+ (a_0b_3 + a_3b_0 + a_1b_2 + a_2b_1)x^3 \\+ (a_2b_0 + a_1b_1 + a_0b_2)x^2 \\+ (a_1b_0 + a_0b_1)x + a_0b_0 $$ What the developers of the Rijndael cipher (more commonly known as AES) did was choose the reduction polynomial \(x^4 + 1\) for the MixColumns() step (not irreducible, but convenient), since it has the nice property that \(x^i \text{mod}\,(x^4 + 1) = x^{i\, \text{mod} 4} \). Thus, our new polynomial multiplication simplifies to $$ a(x) \otimes b(x) = (a_3b_0 + a_2b_1 + a_1b_2 + a_0b_3)x^3 + (a_2b_0 + a_1b_1 + a_0b_2 + a_3b_3)x^2 \\+ (a_1b_0 + a_0b_1 + a_3b_2 + a_2b_3)x \\+ (a_0b_0 + a_3b_1 + a_2b_2 + a_1b_3)$$ which we can represent by matrix multiplication as $$ \begin{pmatrix} a_0 & a_3 & a_2 & a_1 \\ a_1 & a_0 & a_3 & a_2 \\ a_2 & a_1 & a_0 & a_3 \\ a_3 & a_2 & a_1 & a_0 \end{pmatrix} \begin{pmatrix} b_0 \\ b_1 \\ b_2 \\ b_3 \end{pmatrix} $$ Therefore, we come to the new conclusion that the MixColumns() step is actually multiplying each column of the state by the polynomial \( a(x) = 3x^3 + x^2 + x + 2 \). In this case, it's no longer necessary to brute-force the inverse matrix in \( GF(2^8) \), but instead find the inverse polynomial.

In order to find the inverse polynomial, you can either apply the Extended Euclidean Algorithm as mentioned above, or use a nice little algebraic trick: the invertible polynomials modulo \( x^4 + 1 \) (with coefficients in \( GF(2^8) \)) form a finite multiplicative group, and a counting argument shows this group has order \( 255 \cdot 256^3 = 4278190080 \) (three of the coefficients range over all 256 byte values while the last is constrained to 255 of them). Now for any finite group of order \( r \), we have that \( g^r = e\) where \( g \) is any element and \(e\) is the identity. Thus, to find the inverse of \( g \) we utilize the simple fact that \(g \cdot g^{r-1} = g^r = e\), so \(g^{r-1}\) must be the multiplicative inverse. Therefore, the inverse of any invertible polynomial \( g(x) \) here is \( (g(x))^{4278190079} \). However, to use this approach, you would need to be somewhat savvy with a mathematical programming language such as Maple.

Hopefully you don't actually go and write a Maple program to compute this, because the AES document specifies that the inverse polynomial of \(a(x) = 3x^3 + x^2 + x + 2 \) is \( a^{-1}(x) = 11x^3 + 13x^2 + 9x + 14 \). Really dodged a bullet there.

With that said, our InvMixColumns() step can now be represented by the transform: $$ \begin{pmatrix} 0e & 0b & 0d & 09 \\\\ 09 & 0e & 0b & 0d \\\\ 0d & 09 & 0e & 0b \\\\ 0b & 0d & 09 & 0e \end{pmatrix} \begin{pmatrix} p_0 & p_4 & p_8 & p_{12} \\\\ p_1 & p_5 & p_9 & p_{13} \\\\ p_2 & p_6 & p_{10} & p_{14} \\\\ p_3 & p_7 & p_{11} & p_{15} \end{pmatrix} $$ As before you can feel free to use a lookup table for the matrix instead of using the gmult() function (note, however, that this will use up 1 kB of ROM):
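A sketch using the gmult() routine from earlier rather than tables (so nothing here depends on precomputed ROM):

```c
#include <stdint.h>

static uint8_t gmult(uint8_t a, uint8_t b)      /* multiplication in GF(2^8) */
{
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        a = (uint8_t)((a << 1) ^ ((a >> 7) * 0x1b));
        b >>= 1;
    }
    return p;
}

/* Multiply each column of the column-major state by the inverse matrix
 * (0e 0b 0d 09 / 09 0e 0b 0d / 0d 09 0e 0b / 0b 0d 09 0e). */
void InvMixColumns(uint8_t state[16])
{
    for (int c = 0; c < 4; c++) {
        uint8_t *p = state + 4*c;
        uint8_t a0 = p[0], a1 = p[1], a2 = p[2], a3 = p[3];
        p[0] = gmult(a0,0x0e) ^ gmult(a1,0x0b) ^ gmult(a2,0x0d) ^ gmult(a3,0x09);
        p[1] = gmult(a0,0x09) ^ gmult(a1,0x0e) ^ gmult(a2,0x0b) ^ gmult(a3,0x0d);
        p[2] = gmult(a0,0x0d) ^ gmult(a1,0x09) ^ gmult(a2,0x0e) ^ gmult(a3,0x0b);
        p[3] = gmult(a0,0x0b) ^ gmult(a1,0x0d) ^ gmult(a2,0x09) ^ gmult(a3,0x0e);
    }
}
```

Applied to the column 8e 4d a1 bc (the MixColumns() output of db 13 53 45), this recovers the original bytes, which is exactly what being the inverse matrix means.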


InvAddRoundKey()

Just kidding, there is no InvAddRoundKey() — from the \( GF(2^8) \) section above you should understand that addition and subtraction are one and the same, so you simply add the round key right back.

Alright, it's finally time to put the pieces of our decryption together. The forward encryption for AES applies SubBytes(), ShiftRows(), MixColumns(), and AddRoundKey() every round (except the last, which skips MixColumns()), so it should be intuitive that the decryption for AES applies InvShiftRows(), InvSubBytes(), AddRoundKey(), and InvMixColumns() to every round in reverse order except the first (or last, depending on how you think about it). In either case, here is the full decryption:
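Here is a self-contained AES-128 decryption sketch, mirroring the encryption code from earlier (again, function names and the column-major layout are my own choices; the round structure follows the inverse cipher in FIPS 197):

```c
#include <stdint.h>
#include <string.h>

static const uint8_t sbox[256] = {
  0x63,0x7c,0x77,0x7b,0xf2,0x6b,0x6f,0xc5,0x30,0x01,0x67,0x2b,0xfe,0xd7,0xab,0x76,
  0xca,0x82,0xc9,0x7d,0xfa,0x59,0x47,0xf0,0xad,0xd4,0xa2,0xaf,0x9c,0xa4,0x72,0xc0,
  0xb7,0xfd,0x93,0x26,0x36,0x3f,0xf7,0xcc,0x34,0xa5,0xe5,0xf1,0x71,0xd8,0x31,0x15,
  0x04,0xc7,0x23,0xc3,0x18,0x96,0x05,0x9a,0x07,0x12,0x80,0xe2,0xeb,0x27,0xb2,0x75,
  0x09,0x83,0x2c,0x1a,0x1b,0x6e,0x5a,0xa0,0x52,0x3b,0xd6,0xb3,0x29,0xe3,0x2f,0x84,
  0x53,0xd1,0x00,0xed,0x20,0xfc,0xb1,0x5b,0x6a,0xcb,0xbe,0x39,0x4a,0x4c,0x58,0xcf,
  0xd0,0xef,0xaa,0xfb,0x43,0x4d,0x33,0x85,0x45,0xf9,0x02,0x7f,0x50,0x3c,0x9f,0xa8,
  0x51,0xa3,0x40,0x8f,0x92,0x9d,0x38,0xf5,0xbc,0xb6,0xda,0x21,0x10,0xff,0xf3,0xd2,
  0xcd,0x0c,0x13,0xec,0x5f,0x97,0x44,0x17,0xc4,0xa7,0x7e,0x3d,0x64,0x5d,0x19,0x73,
  0x60,0x81,0x4f,0xdc,0x22,0x2a,0x90,0x88,0x46,0xee,0xb8,0x14,0xde,0x5e,0x0b,0xdb,
  0xe0,0x32,0x3a,0x0a,0x49,0x06,0x24,0x5c,0xc2,0xd3,0xac,0x62,0x91,0x95,0xe4,0x79,
  0xe7,0xc8,0x37,0x6d,0x8d,0xd5,0x4e,0xa9,0x6c,0x56,0xf4,0xea,0x65,0x7a,0xae,0x08,
  0xba,0x78,0x25,0x2e,0x1c,0xa6,0xb4,0xc6,0xe8,0xdd,0x74,0x1f,0x4b,0xbd,0x8b,0x8a,
  0x70,0x3e,0xb5,0x66,0x48,0x03,0xf6,0x0e,0x61,0x35,0x57,0xb9,0x86,0xc1,0x1d,0x9e,
  0xe1,0xf8,0x98,0x11,0x69,0xd9,0x8e,0x94,0x9b,0x1e,0x87,0xe9,0xce,0x55,0x28,0xdf,
  0x8c,0xa1,0x89,0x0d,0xbf,0xe6,0x42,0x68,0x41,0x99,0x2d,0x0f,0xb0,0x54,0xbb,0x16
};

static uint8_t inv_sbox[256];

static uint8_t gmult(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        a = (uint8_t)((a << 1) ^ ((a >> 7) * 0x1b));
        b >>= 1;
    }
    return p;
}

static void key_expand(const uint8_t key[16], uint8_t rk[176])  /* same as encryption */
{
    static const uint8_t rcon[11] = {0x00,0x01,0x02,0x04,0x08,0x10,0x20,0x40,0x80,0x1b,0x36};
    memcpy(rk, key, 16);
    for (int i = 4; i < 44; i++) {
        uint8_t t0 = rk[4*i-4], t1 = rk[4*i-3], t2 = rk[4*i-2], t3 = rk[4*i-1];
        if (i % 4 == 0) {
            uint8_t tmp = t0;
            t0 = sbox[t1] ^ rcon[i/4];
            t1 = sbox[t2]; t2 = sbox[t3]; t3 = sbox[tmp];
        }
        rk[4*i]   = rk[4*(i-4)]   ^ t0;  rk[4*i+1] = rk[4*(i-4)+1] ^ t1;
        rk[4*i+2] = rk[4*(i-4)+2] ^ t2;  rk[4*i+3] = rk[4*(i-4)+3] ^ t3;
    }
}

static void add_round_key(uint8_t s[16], const uint8_t *rk)
{
    for (int i = 0; i < 16; i++) s[i] ^= rk[i];
}

static void inv_sub_bytes(uint8_t s[16])
{
    for (int i = 0; i < 16; i++) s[i] = inv_sbox[s[i]];
}

static void inv_shift_rows(uint8_t s[16])       /* rotate row r right by r */
{
    uint8_t t[16];
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            t[r + 4*((c + r) % 4)] = s[r + 4*c];
    memcpy(s, t, 16);
}

static void inv_mix_columns(uint8_t s[16])
{
    for (int c = 0; c < 4; c++) {
        uint8_t *p = s + 4*c;
        uint8_t a0 = p[0], a1 = p[1], a2 = p[2], a3 = p[3];
        p[0] = gmult(a0,0x0e) ^ gmult(a1,0x0b) ^ gmult(a2,0x0d) ^ gmult(a3,0x09);
        p[1] = gmult(a0,0x09) ^ gmult(a1,0x0e) ^ gmult(a2,0x0b) ^ gmult(a3,0x0d);
        p[2] = gmult(a0,0x0d) ^ gmult(a1,0x09) ^ gmult(a2,0x0e) ^ gmult(a3,0x0b);
        p[3] = gmult(a0,0x0b) ^ gmult(a1,0x0d) ^ gmult(a2,0x09) ^ gmult(a3,0x0e);
    }
}

void aes128_decrypt_block(const uint8_t in[16], const uint8_t key[16], uint8_t out[16])
{
    uint8_t rk[176], s[16];
    for (int i = 0; i < 256; i++)               /* invert the S-box */
        inv_sbox[sbox[i]] = (uint8_t)i;
    key_expand(key, rk);
    memcpy(s, in, 16);
    add_round_key(s, rk + 160);                 /* undo the final AddRoundKey */
    for (int round = 9; round >= 1; round--) {
        inv_shift_rows(s);
        inv_sub_bytes(s);
        add_round_key(s, rk + 16*round);
        inv_mix_columns(s);
    }
    inv_shift_rows(s);                          /* first round had no MixColumns */
    inv_sub_bytes(s);
    add_round_key(s, rk);                       /* undo key-whitening */
    memcpy(out, s, 16);
}
```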