**Ellipsoid Mapping**
Pål-Kristian Engstad
$$
\newcommand{\a}[1]{\mathbf{#1}}
\newcommand{\d}[1]{\mathbf{\hat#1}}
\newcommand{\vd}[1]{\mathrm{d}{#1}}
\newcommand{\v}[1]{\mathbf{\underline{#1}}}
\newcommand{\l}[1]{\left\|#1\right\|}
\DeclareMathOperator\arctanh{arctanh}
\newcommand{\inner}[2]{#1\bullet #2}
\newcommand{\dp}[2]{\inner{\a{#1}}{\a{#2}}}
\newcommand{\cp}[2]{\langle\dp{#1}{#2}\rangle}
\newcommand{\ci}[2]{\langle\inner{#1}{#2}\rangle}
\newcommand{\det}[3]{\mathsf{det}\left(#1,\,#2,\,#3\right)}
\newcommand{\abs}[1]{\left|#1\right|}
\newcommand{\jac}[2]{\l{\frac{\partial\;\!{#1}}{\partial\;\!{#2}}}}
$$
# Introduction #
In the previous blog post [#Engstad19], we established the intuition that the GGX distribution arises naturally as the composition of
cosine-weighting a hemisphere followed by the Elliptic Stretch. However, recent papers favor analyzing the
distribution of normals on an ellipsoid surface. In this post, we explain how the two views are related.
As a recap, Elliptic Stretch is a transform given by
$$ \d{m} : \Omega \to \Omega, \qquad \d{m} = \d{m}(\d{s}) = \mathsf{normalize}\begin{bmatrix}\alpha_x \hat{s}_x \\ \alpha_y \hat{s}_y \\ \hat{s}_z\end{bmatrix}.$$
## Ellipsoid Map ##
The *Ellipsoid Map* is a mapping of the hemisphere onto an ellipsoid surface. It turns out that the *normal* at the mapped point on the ellipsoid surface
is related to the GGX distribution. In fact, it is exactly the same normal as the one produced by the Elliptic Stretch.
To generate the (hemi-)ellipsoid from a hemisphere, we *expand* (since $\alpha_x, \alpha_y \in
\left(0, 1\right]$) the hemisphere
$$ \Omega = \left\{ \d{s} = (\hat{s}_x,\hat{s}_y,\hat{s}_z)^\top \, \left| \,\; \hat{s}_z > 0 \wedge \l{\d{s}} = 1 \right. \right\} \subset \mathcal{S}^2 $$
in both the $x$- and $y$-direction using the transformation
$$\a{p} : \Omega \to \mathbb{R}^3, \qquad \a{p} = \begin{bmatrix}x\\y\\z\end{bmatrix} = \a{p}(\d{s}) =
\begin{bmatrix} \alpha_x^{-1} \hat{s}_x \\ \alpha_y^{-1} \hat{s}_y \\ \hat{s}_z \end{bmatrix}.$$
Notice that
$$ \left(\alpha_x x\right)^2 + \left(\alpha_y y\right)^2 + \left(z\right)^2
= \hat{s}_x^2 + \hat{s}_y^2 + \hat{s}_z^2 = 1, $$
so the ellipsoid has semi-axes $\frac{1}{\alpha_x}$, $\frac{1}{\alpha_y}$, and $1$.
We calculate the normal at a point $\a{p}$ on the ellipsoid using the gradient of the implicit function $F(\a{p}) = 0$, i.e.
$$ F(\a{p}) = \alpha_x^2x^2 + \alpha_y^2y^2 + z^2 - 1 = 0.$$
Now, the gradient is
$$ \nabla F = \begin{bmatrix}\frac{\partial F}{\partial x}\\\frac{\partial F}{\partial y} \\ \frac{\partial F}{\partial z}\end{bmatrix} =
\begin{bmatrix}2\alpha_x^2 x\\2\alpha_y^2 y\\2z\end{bmatrix}, $$
and since the gradient is orthogonal to the tangent plane at $\a{p}$, we obtain a unit normal vector by
$$ \d{m} = \mathsf{normalize}\begin{bmatrix}2\alpha_x^2 x \\ 2\alpha_y^2 y \\ 2z\end{bmatrix}
= \mathsf{normalize}\begin{bmatrix}\alpha_x \hat{s}_x \\ \alpha_y \hat{s}_y \\ \hat{s}_z\end{bmatrix}.$$
Comparing the above to the Elliptic Stretch, we see that the two methods are indeed identical.
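To make the equivalence concrete, here is a minimal numerical sketch in Python/NumPy (the function names are ours, chosen for illustration) that computes the normal on the ellipsoid via the gradient of $F$ and checks that it matches the Elliptic Stretch:

```python
import numpy as np

def elliptic_stretch(s, ax, ay):
    """Elliptic Stretch: normalize (ax*sx, ay*sy, sz)."""
    m = np.array([ax * s[0], ay * s[1], s[2]])
    return m / np.linalg.norm(m)

def ellipsoid_normal(s, ax, ay):
    """Map s onto the ellipsoid, then normalize the gradient of F at that point."""
    p = np.array([s[0] / ax, s[1] / ay, s[2]])            # point on the ellipsoid
    g = np.array([ax * ax * p[0], ay * ay * p[1], p[2]])  # gradient of F (factor 2 dropped)
    return g / np.linalg.norm(g)

rng = np.random.default_rng(1)
ax, ay = 0.3, 0.7
for _ in range(1000):
    v = rng.normal(size=3)
    v[2] = abs(v[2])                       # restrict to the upper hemisphere
    s = v / np.linalg.norm(v)
    assert np.allclose(elliptic_stretch(s, ax, ay), ellipsoid_normal(s, ax, ay))
```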
The benefit of going this route is that one can use results from Differential Geometry to prove the connection to the GGX distribution.
In particular, instead of relying on the Normalizing Jacobian Determinant formula, we can instead use the Gaussian curvature
and the connection between the curvature and the Jacobian Determinant of the Gauss map.
This does require a bit more auxiliary math than in the previous post, but for completeness, we show how to prove it.
# Proof using Gaussian Curvature
In this section, we will prove that the normals of the Ellipsoid Map correspond to the GGX normal distribution.
## Definition
Consider an ellipsoid $\mathcal{E}$ with semi-axes
$$ A = \frac{1}{\alpha_x}, \quad B = \frac{1}{\alpha_y}, \quad\text{and}\quad C = 1,$$
i.e., the ellipsoid defined by
$$ \mathcal{E} = \left\{ (x,y,z)^\top\in\mathbb{R}^3 \,|\, \alpha_x^2 x^2 + \alpha_y^2 y^2 + z^2 = 1 \right\}. $$
Differentiating the defining equation in Cartesian coordinates gives
$$ \alpha_x^2 \; x \; \vd{x} + \alpha_y^2 \; y \; \vd{y} + z \; \vd{z} = 0, $$
which shows that the vector with Cartesian coordinates $\a{m} = (\alpha_x^2x, \alpha_y^2y, z)^\top$ is orthogonal
to every tangent vector, and therefore normal to the tangent plane.
Let
$$ \d{m} = \frac{\a{m}}{\l{\a{m}}} = \frac{\left(\alpha_x^2 x, \alpha_y^2 y, z\right)^\top}{\sqrt{\alpha_x^4 x^2 + \alpha_y^4 y^2 + z^2}}
= \frac{\left(\alpha_x \hat{s}_x, \alpha_y \hat{s}_y, \hat{s}_z\right)^\top}{\sqrt{\alpha_x^2 \hat{s}_x^2 + \alpha_y^2 \hat{s}_y^2 + \hat{s}_z^2}}
,$$
and define the support, or "height" (the distance from the center to the tangent plane), as
$$ H = \frac{1}{\l{\a{m}}} = \frac{1}{\sqrt{\alpha_x^4 x^2 + \alpha_y^4 y^2 + z^2}}
= \frac{1}{\sqrt{\alpha_x^2 \hat{s}_x^2 + \alpha_y^2 \hat{s}_y^2 + \hat{s}_z^2}} .$$
The Gaussian curvature of an ellipsoid is [#Gray97]:
$$ K_G = \frac{H^4}{A^2B^2C^2} = \alpha_x^2\alpha_y^2H^4 = \frac{\alpha_x^2\alpha_y^2}{(\alpha_x^4x^2+\alpha_y^4y^2+z^2)^2}.$$
## Curvature w.r.t. the Normal
Now, this gives the curvature $K_G$ as a function of $\a{p}$. To make it a function of $\d{m}$, notice that
$$ \d{m} = \left(H\cdot\alpha_x^2 x, H\cdot\alpha_y^2 y, H\cdot z\right)^\top, $$
and
$$ \alpha_x^2 x^2+\alpha_y^2 y^2 + z^2 = 1, $$
so
$$ H^2 \alpha_x^2 x^2 + H^2 \alpha_y^2 y^2 + H^2 z^2 = H^2,$$
or
$$ \left(H \alpha_x x \right)^2 + \left(H \alpha_y y\right)^2 + (H z)^2 = H^2,$$
therefore
$$ \left(\frac{\hat{m}_x}{\alpha_x}\right)^2 +
\left(\frac{\hat{m}_y}{\alpha_y}\right)^2 +
\hat{m}_z^2 = H^2,$$
or
$$ H = H(\d{m}) = \left(\frac{\hat{m}_x^2}{\alpha_x^2}+\frac{\hat{m}_y^2}{\alpha_y^2}+\hat{m}_z^2\right)^{1/2},$$
so,
$$ K_G = \alpha_x^2 \alpha_y^2 H^4 = \alpha_x^2\;\alpha_y^2\;\left(\frac{\hat{m}_x^2}{\alpha_x^2}+\frac{\hat{m}_y^2}{\alpha_y^2}+\hat{m}_z^2\right)^2. $$
This result is interesting in itself, because $1/K_G$ resembles the GGX normal distribution up to a constant factor. But more on that later.
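As a quick sanity check, the following sketch (Python/NumPy again, with our own helper names) evaluates $K_G$ once from the surface point $\a{p}$ and once from the normal $\d{m}$, and confirms that the two expressions agree:

```python
import numpy as np

def curvature_from_point(p, ax, ay):
    """K_G = ax^2 ay^2 / (ax^4 x^2 + ay^4 y^2 + z^2)^2, evaluated at a surface point p."""
    x, y, z = p
    return ax**2 * ay**2 / (ax**4 * x**2 + ay**4 * y**2 + z**2) ** 2

def curvature_from_normal(m, ax, ay):
    """K_G = ax^2 ay^2 (mx^2/ax^2 + my^2/ay^2 + mz^2)^2, evaluated at the unit normal m."""
    mx, my, mz = m
    return ax**2 * ay**2 * (mx**2 / ax**2 + my**2 / ay**2 + mz**2) ** 2

ax, ay = 0.3, 0.7
rng = np.random.default_rng(2)
v = rng.normal(size=3)
s = v / np.linalg.norm(v)                        # direction on the sphere
p = np.array([s[0] / ax, s[1] / ay, s[2]])       # corresponding point on the ellipsoid
m = np.array([ax * s[0], ay * s[1], s[2]])
m /= np.linalg.norm(m)                           # unit normal at p
assert np.isclose(curvature_from_point(p, ax, ay), curvature_from_normal(m, ax, ay))
```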
## Jacobian Determinant of Map to Ellipsoid
Let
$$ \a{p} = \a{p}(\d{s}) = \begin{bmatrix}x\\y\\z\end{bmatrix} =
\begin{bmatrix}
\alpha_x^{-1} \hat{s}_x\\
\alpha_y^{-1} \hat{s}_y\\
\hat{s}_z
\end{bmatrix}
, $$
as before. When we apply $\a{p}(\d{s})$, a small area $\vd{\d{s}}$ around $\d{s}$ becomes a small area $\vd{A}$ around $\a{p}$.
We need to find the Jacobian Determinant
$$J_\a{p} = \jac{\a{p}}{\d{s}}.$$
Now, we can use any parameterization on $\d{s}$, so pick
$$ \d{s} = \begin{bmatrix}s\\t\\\sqrt{1-s^2-t^2}\end{bmatrix},$$
then
$$ \partial_s\d{s} = \begin{bmatrix}1\\0\\ -\frac{s}{\sqrt{1-s^2-t^2}}\end{bmatrix},\qquad
\partial_t\d{s} = \begin{bmatrix}0\\1\\ -\frac{t}{\sqrt{1-s^2-t^2}}\end{bmatrix},
$$
and
$$
\partial_s\d{s} \times \partial_t\d{s} =
\begin{bmatrix}
\frac{s}{\sqrt{1-s^2-t^2}}\\
\frac{t}{\sqrt{1-s^2-t^2}}\\
1
\end{bmatrix} = \frac{1}{\hat{s}_z}\begin{bmatrix}\hat{s}_x\\\hat{s}_y\\\hat{s}_z\end{bmatrix},
$$
so
$$\l{\partial_s\d{s}\times\partial_t\d{s}} = \frac{1}{\abs{\hat{s}_z}}\left(\hat{s}_x^2 + \hat{s}_y^2 + \hat{s}_z^2\right)^{1/2} = \frac{\l{\d{s}}}{\abs{\hat{s}_z}} = \frac{1}{\abs{\hat{s}_z}},$$
while
$$\begin{align*}
\l{\partial_s\a{p}\times\partial_t\a{p}} &=
\l{\begin{bmatrix}
\alpha_x^{-1} \partial_s\hat{s}_x \\
\alpha_y^{-1} \partial_s\hat{s}_y \\
\partial_s\hat{s}_z
\end{bmatrix}\times
\begin{bmatrix}
\alpha_x^{-1} \partial_t\hat{s}_x \\
\alpha_y^{-1} \partial_t\hat{s}_y \\
\partial_t\hat{s}_z
\end{bmatrix}
} \\ &=
\l{\begin{bmatrix}
\alpha_y^{-1}\left((\partial_s\hat{s}_y) (\partial_t\hat{s}_z) - (\partial_s\hat{s}_z) (\partial_t\hat{s}_y) \right)\\
\alpha_x^{-1}\left((\partial_s\hat{s}_z) (\partial_t\hat{s}_x) - (\partial_s\hat{s}_x) (\partial_t\hat{s}_z) \right)\\
\alpha_x^{-1}\alpha_y^{-1}\left((\partial_s\hat{s}_x) (\partial_t\hat{s}_y) - (\partial_s\hat{s}_y) (\partial_t\hat{s}_x) \right)
\end{bmatrix}
} \\ &=
\l{\begin{bmatrix}
\alpha_y^{-1}\left(\partial_s\d{s}\times\partial_t\d{s}\right)_x\\
\alpha_x^{-1}\left(\partial_s\d{s}\times\partial_t\d{s}\right)_y\\
\alpha_x^{-1}\alpha_y^{-1}\left(\partial_s\d{s}\times\partial_t\d{s}\right)_z
\end{bmatrix}
} \\ &=
\alpha_x^{-1}\alpha_y^{-1}\frac{1}{\abs{\hat{s}_z}}
\l{\begin{bmatrix}
\alpha_x\,\hat{s}_x\\
\alpha_y\,\hat{s}_y\\
\hat{s}_z
\end{bmatrix}
} \\ &=
\frac{\left(
\alpha_x^2\hat{s}_x^2+
\alpha_y^2\hat{s}_y^2+
\hat{s}_z^2
\right)^{1/2}
}{\alpha_x\alpha_y\abs{\hat{s}_z}}
\end{align*}
$$
so we get
$$\jac{\a{p}}{\d{s}} =
\frac{\l{\partial_s\a{p}\times\partial_t\a{p}}}
{\l{\partial_s\d{s}\times\partial_t\d{s}}} =
\frac{\left(
\alpha_x^2\hat{s}_x^2+
\alpha_y^2\hat{s}_y^2+
\hat{s}_z^2
\right)^{1/2}
}{\alpha_x\alpha_y} = \frac{1}{\alpha_x\alpha_y\cdot H}.$$
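The closed form can be spot-checked with a finite-difference approximation of the area ratio; below is a rough sketch (the step size, test point, and helper names are our own choices):

```python
import numpy as np

def to_hemisphere(s, t):
    """The chosen parameterization of the hemisphere."""
    return np.array([s, t, np.sqrt(1.0 - s * s - t * t)])

def to_ellipsoid(v, ax, ay):
    """The map p(s) from the hemisphere to the ellipsoid."""
    return np.array([v[0] / ax, v[1] / ay, v[2]])

ax, ay, h = 0.3, 0.7, 1e-5
s, t = 0.2, -0.4

# Tangent vectors by central differences, on the hemisphere and on the ellipsoid.
ds_s = (to_hemisphere(s + h, t) - to_hemisphere(s - h, t)) / (2 * h)
ds_t = (to_hemisphere(s, t + h) - to_hemisphere(s, t - h)) / (2 * h)
dp_s = (to_ellipsoid(to_hemisphere(s + h, t), ax, ay) - to_ellipsoid(to_hemisphere(s - h, t), ax, ay)) / (2 * h)
dp_t = (to_ellipsoid(to_hemisphere(s, t + h), ax, ay) - to_ellipsoid(to_hemisphere(s, t - h), ax, ay)) / (2 * h)

numeric = np.linalg.norm(np.cross(dp_s, dp_t)) / np.linalg.norm(np.cross(ds_s, ds_t))

sv = to_hemisphere(s, t)
H = 1.0 / np.sqrt(ax**2 * sv[0]**2 + ay**2 * sv[1]**2 + sv[2]**2)
closed_form = 1.0 / (ax * ay * H)
print(numeric, closed_form)  # the two values should agree closely
```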
## Jacobian of Normal Vector from Ellipsoid Surface
Now, if $\vd{A}$ denotes the area of an infinitesimal region on the surface containing the point $\a{p}$,
then $K_G\;\vd{A}$ is the area of the Gauss-map image of that region [#Upax16]:
$$ \vd{\d{m}} = K_G\;\vd{A},$$
so $K_G$ acts as the Jacobian of the map that takes points $\a{p}$ on the ellipsoid to their unit normals, i.e.:
$$ \jac{\d{m}}{\a{p}} = K_G = \alpha_x^2\alpha_y^2\;H^4.$$
## Completing the Proof
Recall from the previous post that the cosine-weighted map from the unit square to the hemisphere has Jacobian Determinant $\jac{\d{s}}{\a{u}} = \pi/\abs{\hat{s}_z}$. Chaining the Jacobians, we get
$$ \jac{\d{m}}{\a{u}} =
\jac{\d{m}}{\a{p}} \cdot
\jac{\a{p}}{\d{s}} \cdot
\jac{\d{s}}{\a{u}} =
\alpha_x^2\alpha_y^2H^4 \cdot \frac{1}{\alpha_x\alpha_y \cdot H} \cdot \frac{\pi}{\abs{\hat{s}_z}} = \frac{\pi\alpha_x\alpha_yH^3}{\abs{\hat{s}_z}},$$
but notice that
$$\abs{\inner{\d{m}}{\d{n}}} = \abs{\hat{m}_z} = H \abs{\hat{s}_z},$$
so
$$ \jac{\d{m}}{\a{u}} = \frac{\pi\alpha_x\alpha_y\;H^4}{\abs{\hat{m}_z}} = \frac{1}{D(\d{m})\;\abs{\inner{\d{m}}{\d{n}}}},$$
just as in the previous blog post. So, we have proven yet again that the PDF is
$$ p(\d{m}) = p_U(\a{u})\cdot\jac{\d{m}}{\a{u}}^{-1} = D(\d{m})\cdot\abs{\inner{\d{m}}{\d{n}}},$$
since the PDF on the unit square is $p_U(\a{u}) = 1$.
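Putting the pieces together, the resulting sampling routine is just cosine-weighted hemisphere sampling followed by the Elliptic Stretch, as in the previous post. A minimal sketch (we use the standard polar mapping for the cosine-weighted sample; the exact parameterization in the previous post may differ):

```python
import numpy as np

def sample_ggx_normal(u1, u2, ax, ay):
    """Map a point (u1, u2) in the unit square to a normal distributed as D(m)*|dot(m, n)|."""
    # Cosine-weighted hemisphere sample via the polar mapping (Jacobian pi/|s_z|).
    phi = 2.0 * np.pi * u1
    r = np.sqrt(u2)
    s = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u2)])
    # Elliptic Stretch (equivalently, the Ellipsoid Map normal).
    m = np.array([ax * s[0], ay * s[1], s[2]])
    return m / np.linalg.norm(m)

rng = np.random.default_rng(3)
m = sample_ggx_normal(rng.random(), rng.random(), 0.3, 0.7)
```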
# Using the Gauss-map Directly
We noticed earlier that $1/K_G$ as a function of $\d{m}$ is equal to $D(\d{m})$ up to a constant factor.
Define
$$ D(\d{m}) = c\cdot \frac{1}{K_G},$$
where $c$ is the normalization constant we are looking for. Now, since we require
$$ \int_\Omega D(\d{m}) \abs{\inner{\d{m}}{\d{n}}} \vd{\d{m}} = 1,$$
we have
$$ \int_\Omega D(\d{m}) \abs{\inner{\d{m}}{\d{n}}} \vd{\d{m}} = \int_\mathcal{E} c\cdot\frac{1}{K_G} \abs{\inner{\d{m}}{\d{n}}}\cdot K_G \;\vd{A} = c\cdot \int_\mathcal{E} \abs{\inner{\d{m}}{\d{n}}} \;\vd{A} = c \cdot A_\d{n}^\bot(\mathcal{E}), $$
where $A_\d{n}^\bot(\mathcal{E}) = \int_\mathcal{E}\abs{\inner{\d{m}}{\d{n}}} \;\vd{A}$ is the projected area of the surface $\mathcal{E}$ in the direction $\d{n}$. Hence,
$$ c = \frac{1}{A_\d{n}^\bot(\mathcal{E})}.$$
In our case, the projected area of the (hemi-)ellipsoid is simply the area of the ellipse with radii $1/\alpha_x$ and $1/\alpha_y$, which results in
$$ A_\d{n}^\bot(\mathcal{E}) = \pi\cdot\left(\frac{1}{\alpha_x}\right)\cdot\left(\frac{1}{\alpha_y}\right) = \frac{\pi}{\alpha_x\alpha_y},$$
and
$$ D(\d{m}) = \frac{1}{A_\d{n}^\bot(\mathcal{E})}\cdot\frac{1}{K_G} = \frac{1}{\pi\,\alpha_x\alpha_y\left(\frac{\hat{m}_x^2}{\alpha_x^2}+\frac{\hat{m}_y^2}{\alpha_y^2}+\hat{m}_z^2\right)^2}. $$
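For reference, here is $D(\d{m})$ as a function, together with a crude Monte Carlo sanity check of the normalization $\int_\Omega D(\d{m})\,\abs{\inner{\d{m}}{\d{n}}}\,\vd{\d{m}} = 1$ (uniform hemisphere directions; sample count and seed are arbitrary choices):

```python
import numpy as np

def ggx_ndf(m, ax, ay):
    """D(m) = 1 / (pi*ax*ay*(mx^2/ax^2 + my^2/ay^2 + mz^2)^2), for m of shape (..., 3)."""
    mx, my, mz = m[..., 0], m[..., 1], m[..., 2]
    return 1.0 / (np.pi * ax * ay * (mx**2 / ax**2 + my**2 / ay**2 + mz**2) ** 2)

# Uniform directions on the hemisphere have pdf 1/(2*pi), so the estimator is
# 2*pi * mean(D(m) * m_z); it should come out close to 1.
ax, ay, n = 0.4, 0.6, 200_000
rng = np.random.default_rng(4)
v = rng.normal(size=(n, 3))
v[:, 2] = np.abs(v[:, 2])
m = v / np.linalg.norm(v, axis=1, keepdims=True)
print(2.0 * np.pi * np.mean(ggx_ndf(m, ax, ay) * m[:, 2]))
```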
This means that if we can sample points $\a{p}$ on the ellipsoid *uniformly*, then calculating the normal at $\a{p}$ gives us a direction sampled according to the GGX normal distribution.
## Uniform Sampling of an Ellipsoid ##
We have to be quite careful when trying to get a uniform sampling pattern on the Ellipsoid. One method is to generate
points uniformly on the hemisphere, apply the mapping $\a{p}(\d{s})$, and then correct for the distortion by rejecting
samples.
### Rejection Sampling ###
We want to sample $\a{p}$ uniformly on the ellipsoid, i.e. with constant (unnormalized) density $p(\a{p}) = 1$, but instead we draw $\d{s} \sim U(\Omega)$
and apply the transform $\a{p}$. The resulting (unnormalized) density of $\a{p}$ is
$$ q(\a{p}) = \jac{\a{p}}{\d{s}}^{-1} = \alpha_x\alpha_y H(\a{p}) = \frac{\alpha_x\alpha_y}{\sqrt{\alpha_x^4 x^2 + \alpha_y^4 y^2 + z^2}}. $$
We know that $q$ is smallest at the pole $\a{p} = (0, 0, 1)^\top$, where $q = \alpha_x\alpha_y$; in other words,
$$\frac{p(\a{p})}{q(\a{p})} = \frac{1}{q(\a{p})} \le \frac{1}{\alpha_x\alpha_y} = M.$$
This means that we draw a random number $\xi \sim U(0, 1)$ and accept samples only when
$$ \xi < \frac{1}{M\cdot q(\a{p})} = \frac{\alpha_x\alpha_y}{q(\a{p})} = \frac{1}{H} = \sqrt{\alpha_x^4 x^2 + \alpha_y^4 y^2 + z^2}. $$
However, this can lead to many rejected samples. For instance, when $\alpha_x = 1$ and $\alpha_y = 0.2$, the point $(0, 5, 0)^\top$ lies on the
ellipsoid, and
$$ \frac{1}{M\cdot q(0, 5, 0)} = \left(1^4\cdot 0^2 + (0.2)^4\cdot 5^2 + 0^2\right)^{1/2} = 0.2,$$
so roughly $80\%$ of the samples near that point would be rejected.
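Below is a minimal sketch of this rejection procedure (Python/NumPy, with our own function name): draw a uniform direction on the hemisphere, accept with probability $1/H$, and map the accepted direction onto the ellipsoid.

```python
import numpy as np

def sample_ellipsoid_uniform(rng, ax, ay):
    """Uniform point on the upper half-ellipsoid via rejection from the hemisphere."""
    while True:
        v = rng.normal(size=3)
        v[2] = abs(v[2])
        s = v / np.linalg.norm(v)        # uniform direction on the hemisphere
        # Acceptance probability 1/H = sqrt(ax^2 sx^2 + ay^2 sy^2 + sz^2) <= 1.
        if rng.random() < np.sqrt(ax**2 * s[0]**2 + ay**2 * s[1]**2 + s[2]**2):
            return np.array([s[0] / ax, s[1] / ay, s[2]])   # map onto the ellipsoid

rng = np.random.default_rng(5)
p = sample_ellipsoid_uniform(rng, 1.0, 0.2)
```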
**Note**: Some authors sample the hemisphere uniformly first, and then scale the hemisphere to an ellipsoid without
performing the rejection step. This is wrong and leads to too many samples where $q(\a{p})$ is large, i.e. near the ends of the long axes.
![Figure [no-rej-top]: No rejection sampling](no-rej-top.svg)
![Figure [rej-top]: Rejection sampling](rej-top.svg)
### Normal Distribution ###
A better way is to draw samples from normal distributions with standard deviations $1/\alpha_x$, $1/\alpha_y$, and $1$ along the three axes:
$$\begin{align*}\xi_x & \sim \mathcal{N}\left(0, \frac{1}{\alpha_x^2}\right), \\
\xi_y & \sim \mathcal{N}\left(0, \frac{1}{\alpha_y^2}\right), \\
\xi_z & \sim \mathcal{N}\left(0, 1\right),\end{align*}$$
then calculate
$$d = \sqrt{\alpha_x^2\xi_x^2 + \alpha_y^2\xi_y^2 + \xi_z^2},$$
and use points
$$\a{p} = \frac{1}{d} \begin{bmatrix} \xi_x \\ \xi_y \\ \xi_z \end{bmatrix}.$$
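For completeness, here is a direct transcription of this procedure as a sketch (the variable names are ours; the standard deviations follow the formulas above):

```python
import numpy as np

def sample_ellipsoid_gaussian(rng, ax, ay):
    """Draw (xi_x, xi_y, xi_z) with standard deviations 1/ax, 1/ay, 1 and
    project onto the ellipsoid, following the formulas above."""
    xi = rng.normal(size=3) * np.array([1.0 / ax, 1.0 / ay, 1.0])
    d = np.sqrt(ax**2 * xi[0]**2 + ay**2 * xi[1]**2 + xi[2]**2)
    return xi / d

rng = np.random.default_rng(6)
p = sample_ellipsoid_gaussian(rng, 1.0, 0.2)
```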
![Figure [nrm-top]: Normal Distribution Sampling](nrm-top.svg)
# References
[#Engstad19]: Engstad, P. "Elliptic Stretch" [Link](./elliptic-stretch.html)
[#Gray97]: Gray, A. "The Ellipsoid" and "The Stereographic Ellipsoid." §13.2 and 13.3 in Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL: CRC Press, pp. 301-303, 1997
[#Upax16]: Upax, "Proof that the Gaussian Curvature is a Ratio of Areas." [Stackexchange](https://math.stackexchange.com/questions/1198377/proof-that-the-gaussian-curvature-is-a-ratio-of-areas)