The Convolution Theorem

The Basis

In linear algebra, we’re used to building a vector $\mathbf{v}$ out of other vectors $\mathbf{v_1}, \mathbf{v_2}$.

\begin{equation} \mathbf{v} = \mathbf{v_1} + \mathbf{v_2} \end{equation}

Each vector is, at the very least, implicitly constructed out of its basis vectors. If there is no specific basis mentioned anywhere, we assume the standard basis of unit vectors. For instance, for a two-dimensional space these are $\mathbf{\Phi_1} = (1, 0)$ and $\mathbf{\Phi_2} = (0, 1)$, so a vector $\mathbf{v} = (3, 2)$ is just a sum of scaled basis vectors.

\begin{equation} \left( \begin{array}{c} 3\\ 2 \end{array} \right) = 3 \cdot \mathbf{\Phi_1} + 2 \cdot \mathbf{\Phi_2} \end{equation}

The numbers $3$ and $2$ are the coefficients of the basis vectors $\mathbf{\Phi_i}$. We can call them $c_i$.

\begin{equation}\label{vector_reconstruction_sample} \mathbf{v} = c_1 \cdot \mathbf{\Phi_1} + c_2 \cdot \mathbf{\Phi_2} \end{equation}

The same is true for functions. We can build a function $v(x)$ out of other functions $v_1(x)$ and $v_2(x)$.

\begin{equation} v(x) = v_1(x) + v_2(x) \end{equation}

We can likewise represent one function as a weighted sum of a set of basis functions, just like we can represent one vector as a weighted sum of basis vectors.

\begin{equation} v(x) = c_1 \cdot \Phi_1(x) + c_2 \cdot \Phi_2(x) \end{equation}

The only question is: how do we find the scaling coefficients (i.e., $c_1$ and $c_2$) that build the desired function from a given set of basis functions?

Function Transforms

When transforming a vector $\mathbf{v}$ from one basis to another, we multiply the vector $\mathbf{v}$ by each basis vector $\mathbf{\Phi_i}$ to get the new coefficients. The multiplication we use is the dot product, or more generally the inner product $\langle \cdot, \cdot \rangle$, which projects $\mathbf{v}$ onto each basis vector $\mathbf{\Phi_i}$.

\begin{equation} \langle \mathbf{v}, \mathbf{\Phi_i} \rangle = c_i \end{equation}

This can also be written as a sum over products of the individual vector components $v_k$ and $\Phi_{ik}$.

\begin{equation}\label{vector_basis_transform} \sum_k v_k \cdot \Phi_{ik} = c_i \end{equation}

To reconstruct vector $\mathbf{v}$ from the new basis, we simply sum the basis vectors scaled by their coefficients.

\begin{equation}\label{vector_basis_reconstruction} \sum_i c_i \cdot \mathbf{\Phi_i} = \mathbf{v} \end{equation}
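As a quick numerical sanity check, here is a minimal sketch of Equations \ref{vector_basis_transform} and \ref{vector_basis_reconstruction} in Python; the rotated basis is an arbitrary choice for illustration.

```python
import numpy as np

# An orthonormal 2D basis: the standard basis rotated by 45 degrees.
phi_1 = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
phi_2 = np.array([-np.sin(np.pi / 4), np.cos(np.pi / 4)])

v = np.array([3.0, 2.0])

# Transform: project v onto each basis vector to get the coefficients.
c_1 = np.dot(v, phi_1)
c_2 = np.dot(v, phi_2)

# Reconstruction: sum the basis vectors scaled by their coefficients.
v_reconstructed = c_1 * phi_1 + c_2 * phi_2

print(np.allclose(v, v_reconstructed))  # True
```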

Notice how Equation \ref{vector_basis_reconstruction} is just the general form of Equation \ref{vector_reconstruction_sample}. Transforming a function $v(s)$ into a set of coefficients $c_i$ for a function basis $\Phi_i(s)$ is almost the same process as in Equation \ref{vector_basis_transform}. Here, we integrate the function over its domain $S$ against each basis function $\Phi_i$ individually.

\begin{equation}\label{function_basis_transform} \int_S v(s) \Phi_i(s) ds = c_i \end{equation}

Note that the only difference between Equations \ref{vector_basis_transform} and \ref{function_basis_transform} is that in one case we sum over a discrete representation (i.e., the vector components) and in the other we integrate instead. This is why we write an inner product rather than a dot product: the notation is independent of whether the basis consists of functions, vectors, or anything else.

\begin{equation} \langle v, \Phi_i \rangle = c_i \end{equation}

Reconstructing the original function works just as in Equation \ref{vector_basis_reconstruction}.

\begin{equation}\label{function_basis_reconstruction} \sum_i c_i \cdot \Phi_i(s) = v(s) \end{equation}
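The same round trip can be sketched numerically, with the integral approximated by a Riemann sum. As an example basis (an assumption for this sketch, not the only choice), we use a few normalized Fourier basis functions on $[0, 2\pi]$:

```python
import numpy as np

# Discretized domain S = [0, 2*pi); endpoint=False avoids double-counting the
# periodic endpoint in the Riemann sum.
s = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
ds = s[1] - s[0]

# A few orthonormal basis functions (normalized Fourier basis).
basis = [
    np.full_like(s, 1.0 / np.sqrt(2.0 * np.pi)),
    np.sin(s) / np.sqrt(np.pi),
    np.cos(s) / np.sqrt(np.pi),
    np.sin(2.0 * s) / np.sqrt(np.pi),
]

# The function we want to transform; it lies in the span of the basis.
v = 0.5 + np.sin(s) - 2.0 * np.cos(s)

# Transform: integrate v against each basis function (Riemann sum).
coeffs = [np.sum(v * phi) * ds for phi in basis]

# Reconstruction: sum the basis functions scaled by their coefficients.
v_reconstructed = sum(c * phi for c, phi in zip(coeffs, basis))

print(np.allclose(v, v_reconstructed))  # True
```

Note that the reconstruction is only exact because $v(s)$ lies in the span of the (truncated) basis; for an arbitrary function, Equation \ref{function_basis_reconstruction} with finitely many terms yields an approximation.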

Function Basis Properties

With vectors, we can use the inner product (that is, the dot product) to determine some properties of their relationship. For instance, we can tell from the projection whether or not two vectors $\mathbf{\Phi_i}$ and $\mathbf{\Phi_j}$ are orthogonal, or even orthonormal, to one another.

\begin{equation}\label{dirac_delta} c = \langle \mathbf{\Phi_i}, \mathbf{\Phi_j} \rangle = \mathbf{\Phi_i} \cdot \mathbf{\Phi_j} = \delta_{ij} = \left\{ \begin{array}{c} 1, i = j \\ 0, i \neq j \end{array} \right. \end{equation}

If the coefficient $c$ is $0$, the two vectors $\mathbf{\Phi_i}$ and $\mathbf{\Phi_j}$ are orthogonal to each other. If, additionally, the inner product of every basis vector with itself is $1$ and not some arbitrary number $n$, then the vectors form an orthonormal basis. $\delta_{ij}$ is called the Kronecker delta and is just a shorthand for this basis behavior. With functions, the same concept holds and the formula is identical. Instead of a dot product, the inner product of two functions $\Phi_i(s)$ and $\Phi_j(s)$ is an integral. But again, if $c$ turns out to be $0$, we know both functions are orthogonal to each other, and if every function's inner product with itself is $1$, the function basis is orthonormal.

\begin{equation}\label{dirac_delta_functions} c = \langle \Phi_i, \Phi_j \rangle = \int_S \Phi_i(s) \cdot \Phi_j(s) ds = \delta_{ij} = \left\{ \begin{array}{c} 1, i = j \\ 0, i \neq j \end{array} \right. \end{equation}
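As a sketch, we can verify this behavior numerically for the Fourier basis from the previous example by computing the inner product of every pair of basis functions:

```python
import numpy as np

s = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
ds = s[1] - s[0]

basis = [
    np.full_like(s, 1.0 / np.sqrt(2.0 * np.pi)),
    np.sin(s) / np.sqrt(np.pi),
    np.cos(s) / np.sqrt(np.pi),
]

# Inner product of every pair of basis functions: ~1 on the diagonal (i == j)
# and ~0 everywhere else (i != j), i.e. the Kronecker delta.
gram = np.array([[np.sum(phi_i * phi_j) * ds for phi_j in basis] for phi_i in basis])

print(np.allclose(gram, np.eye(len(basis))))  # True
```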

We now have all the tools necessary to define properties of a function space: Equation \ref{dirac_delta_functions} can be used to define orthogonality and orthonormality of two functions, and \ref{function_basis_reconstruction} can be used to define linear dependency (one function $\Phi_i$ is a sum of scalar multiples of other functions $\Phi_j$ of the same basis) and therefore also the rank and determinant of a function basis.

The Convolution Theorem

Now that we know how to transform a function into coefficients of a function basis, and how to test whether or not a function basis has certain properties like orthonormality, let’s perform a magic trick. Imagine we have an orthonormal basis $\Phi_i$, i.e. the product of two functions of that basis will always integrate to $1$ if they are identical, and to $0$ if they are not.

We can represent any function $f(s)$ by a bunch of coefficients in that basis. Imagine now that we have two functions: $f(s)$ and $g(s)$.

\begin{eqnarray} f(s) = \sum_i f_i \cdot \Phi_i(s) \label{f_reconstruction}\\ g(s) = \sum_j g_j \cdot \Phi_j(s) \label{g_reconstruction} \end{eqnarray}

Let’s multiply them together and integrate the result. We can replace the functions with the sums from Equations \ref{f_reconstruction} and \ref{g_reconstruction}.

\begin{equation}\label{convolution_step1} \int_S f(s) \cdot g(s) ds = \int_S \left( \sum_i f_i \cdot \Phi_i(s) \right) \cdot \left( \sum_j g_j \cdot \Phi_j(s) \right)ds \end{equation}

In Equation \ref{convolution_step1} we can reorder some things because everything is linear: We can move the two sums $\sum_i$ and $\sum_j$ out of the integral, together with their coefficients, because they do not depend on $s$.

\begin{eqnarray} \int_S f(s) \cdot g(s) ds & = & \int_S \left( \sum_i f_i \cdot \Phi_i(s) \right) \cdot \left( \sum_j g_j \cdot \Phi_j(s) \right)ds \\ & = & \sum_i \sum_j f_i \cdot g_j \cdot \int_S \Phi_i(s) \cdot \Phi_j(s) ds \label{convolution_step2} \end{eqnarray}

However, we just noticed in the previous section that a function basis can be orthonormal. An orthonormal function basis is very nice to have, because the integrated product of the function basis vectors (i.e., the inner product of two functions) will be either $1$ or $0$, so the integral in Equation \ref{convolution_step2} can be replaced with the Kronecker Delta from Equation \ref{dirac_delta_functions}!

\begin{eqnarray} \int_S f(s) \cdot g(s) ds & = & \int_S \left( \sum_i f_i \cdot \Phi_i(s) \right) \cdot \left( \sum_j g_j \cdot \Phi_j(s) \right)ds \\ & = & \sum_i \sum_j f_i \cdot g_j \cdot \int_S \Phi_i(s) \cdot \Phi_j(s) ds \\ & = & \sum_i \sum_j f_i \cdot g_j \cdot \delta_{ij} \label{convolution_step3} \end{eqnarray}

But if we have a Kronecker delta, then every term where $i$ and $j$ are not equal is just $0$. So we can simplify even further and remove one index.

\begin{eqnarray} \int_S f(s) \cdot g(s) ds & = & \int_S \left( \sum_i f_i \cdot \Phi_i(s) \right) \cdot \left( \sum_j g_j \cdot \Phi_j(s) \right)ds \\ & = & \sum_i \sum_j f_i \cdot g_j \cdot \int_S \Phi_i(s) \cdot \Phi_j(s) ds \\ & = & \sum_i \sum_j f_i \cdot g_j \cdot \delta_{ij} \\ & = & \sum_i f_i \cdot g_i \label{convolution_step4} \end{eqnarray}

What is this? Equation \ref{convolution_step4} looks like a dot product. In essence, this means we can shortcut the integration of a product of two functions $f(s)$ and $g(s)$ with a dot product of their coefficients $f_i$ and $g_i$, provided the function basis is orthonormal!
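Here is a minimal numerical check of that shortcut, again using the normalized Fourier basis from the earlier sketches. Both functions are chosen to lie in the span of the basis, which is what makes the shortcut exact:

```python
import numpy as np

s = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
ds = s[1] - s[0]

basis = [
    np.full_like(s, 1.0 / np.sqrt(2.0 * np.pi)),
    np.sin(s) / np.sqrt(np.pi),
    np.cos(s) / np.sqrt(np.pi),
]

def project(v):
    # Coefficients of v in the orthonormal basis.
    return np.array([np.sum(v * phi) * ds for phi in basis])

f = 1.0 + 2.0 * np.sin(s)        # f(s), lies in the span of the basis
g = 0.5 * np.cos(s) - np.sin(s)  # g(s), lies in the span of the basis

# The expensive way: integrate the product over the whole domain.
integral = np.sum(f * g) * ds

# The cheap way: a dot product of the coefficient vectors.
shortcut = np.dot(project(f), project(g))

print(np.isclose(integral, shortcut))  # True
```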

Why is this important?


Filter operations always take the form of Equation \ref{convolution_step1}. For instance, a Gaussian filter is a Gaussian that is, within a small window, multiplied with another function. One can do that in a pixel-based domain, where each pixel is multiplied with the value of the Gaussian at the same position, but it is really just two functions multiplied together.

However, if one of the two functions is already available as coefficients of a harmonic function basis, for instance the DCT coefficients stored in a JPEG image, then we can apply the filter in the function basis directly, and an operation which would normally be very expensive becomes a cheap dot product! If the number of coefficients is smaller than the number of operations necessary in the original domain $S$ of the function (that is, the number of pixels we need to sum up), we can save a tremendous amount of computation time.

Precomputed Radiance Transfer

Figure: Transfer function visualized. Every white pixel in this $360^{\circ}$ image is one where a ray, evaluated by a raytracer, was blocked by some geometry. All other pixels represent directions in which a ray shot from this point could travel freely away from all geometry.

A main observation of Precomputed Radiance Transfer is the following: one can simplify the rendering equation to something we have seen above by throwing out self-emittance and unifying everything in the integral that isn't incoming light into one homogeneous diffuse transfer function $T(\mathbf{\omega_i})$.

\begin{eqnarray} L(\mathbf{\omega_o}) & = & L_e(\mathbf{\omega_o}) + \int_\Omega L(\mathbf{\omega_i}) \cdot f(x, \mathbf{\omega_i}, \mathbf{\omega_o}) \cdot \langle \mathbf{N}, \mathbf{\omega_i} \rangle \, d\mathbf{\omega_i} \\ & \rightarrow & \int_\Omega L(\mathbf{\omega_i}) \cdot T(\mathbf{\omega_i}) \, d\mathbf{\omega_i} \label{prt_integral} \end{eqnarray}

The transfer function is simply everything (materials, visibility, the Lambert factor, etc.) packed into a single function. A simple visualization is in the figure above, where the transfer function of a point shows a visibility function. Imagine now that, next to a transfer function, we also have a function representing our environment light. The crucial observation is that Equation \ref{prt_integral} looks just like Equation \ref{convolution_step1}, which means that if we can get both functions as coefficients of some function basis, we can compute the integral in Equation \ref{prt_integral} with just one simple dot product of their coefficients!

Why is this practical? Getting the coefficients is really expensive: we would need to compute all transfer functions (probably with a raytracer), which is already very costly, and then do a Monte Carlo estimation to get their coefficients. This is far too expensive to do in real time. But if the car in the figure never moves, then the transfer function never changes! So if we compute all transfer functions in an offline step and save them (one might almost be tempted to call this pre-computing the transfer), then at runtime we only need to get the coefficients for the light, which is fairly easy to do. In fact, if we only have a handful of different lighting configurations, we can precompute the coefficient vectors for all of them and, when switching between skyboxes at runtime, simply fetch the respective coefficients.
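To make the runtime part concrete, here is a hedged sketch of what shading then boils down to. The coefficient count, array shapes, and the random stand-in data are illustrative assumptions, not the layout of any particular implementation; the references below use a spherical harmonics basis with a handful of coefficients per point:

```python
import numpy as np

NUM_COEFFS = 16  # e.g. 4 bands of spherical harmonics; an assumption for this sketch

# Offline, the expensive part: project the transfer function T of every surface
# point into the basis (in practice via raytracing plus Monte Carlo estimation).
# Random stand-in data here, since the actual values depend on the scene.
rng = np.random.default_rng(0)
transfer_coeffs = rng.standard_normal((100_000, NUM_COEFFS))  # one vector per point

# Per lighting environment (also precomputable per skybox): the light coefficients.
light_coeffs = rng.standard_normal(NUM_COEFFS)

# Runtime: the integral of L * T collapses to one dot product per surface point.
radiance = transfer_coeffs @ light_coeffs  # shape: (100_000,)
```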


  1. Peter-Pike Sloan, Jan Kautz, and John Snyder, Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments
  2. Robin Green, Spherical Harmonic Lighting: The Gritty Details
