**Elliptic Stretch**
Pål-Kristian Engstad
$$
\newcommand{\a}[1]{\mathbf{#1}}
\newcommand{\d}[1]{\mathbf{\hat#1}}
\newcommand{\vd}[1]{\mathrm{d}{#1}}
\newcommand{\v}[1]{\mathbf{\underline{#1}}}
\newcommand{\s}[1]{\mathbf{\tilde{#1}}}
\newcommand{\l}[1]{\left\|#1\right\|}
\DeclareMathOperator\arctanh{arctanh}
\newcommand{\inner}[2]{#1\bullet #2}
\newcommand{\dp}[2]{\inner{\a{#1}}{\a{#2}}}
\newcommand{\cp}[2]{\langle\dp{#1}{#2}\rangle}
\newcommand{\ci}[2]{\langle\inner{#1}{#2}\rangle}
\newcommand{\det}[3]{\mathsf{det}\left(#1,\,#2,\,#3\right)}
\newcommand{\abs}[1]{\left|#1\right|}
\newcommand{\jac}[2]{\l{\frac{\partial\;\!{#1}}{\partial\;\!{#2}}}}
$$
# Introduction
Many, if not most, real-time implementations of Physically Based Rendering use the GGX distribution as the
underlying model for the distribution of micro-facet normals.
A surprising fact about this distribution is that it emerges directly from the Elliptic Stretch of a Cosine-Weighted Hemisphere.
Heitz noted this elegant conceptual model [#Heitz17], but similar ideas are present in Walter et al.'s work on the Ellipsoid Normal
Distribution Function [#Walter14], and before that, Trowbridge and Reitz [#TrReitz75] introduced it in the physics literature.
While these studies focus on normals on an ellipsoid surface, we will focus on a more straightforward concept,
the Elliptic Stretch.
The Elliptic Stretch bends micro-facet normals towards the geometry normal according to the roughness
parameters $\alpha_x$ and $\alpha_y$:
$$\d{m} = \mathsf{normalize}\begin{bmatrix}\alpha_x\cdot \hat{s}_x\\\alpha_y\cdot \hat{s}_y\\\hat{s}_z\end{bmatrix},$$
where $\d{s}$ is a unit-length vector on the hemisphere (we use the notation $\d{x}$ for unit-length vectors).
The Cosine-Weighted Hemisphere consists of unit-length vectors parameterized over $\a{u} = (u, v)$ on the unit square:
$$\d{s} = \begin{bmatrix}\sqrt{u}\cdot\cos\left(2\pi v\right)\\\sqrt{u}\cdot\sin\left(2\pi v\right)\\\sqrt{1-u}\end{bmatrix}.$$
What is not so obvious is that combining the cosine-weighted hemisphere with the elliptic stretch corresponds to the prescribed
probability density function
$$ p(\d{m}) = D(\d{m})\cdot(\inner{\d{m}}{\d{n}}), $$
using the GGX normal distribution
$$ D(\d{m}) = \frac{1}{\pi\cdot\alpha_x\alpha_y\left(\frac{\hat{m}_x^2}{\alpha_x^2} + \frac{\hat{m}_y^2}{\alpha_y^2} + \hat{m}_z^2\right)^2}.$$
While there is a general explanation in [#Walter14], and one can confirm it by numerical analysis (e.g., in Matlab), we are not aware of a direct
and "simple" proof. We will attempt one in this note.
# Mathematical Tools
In this section, we will establish mathematical tools in order to work with probability density functions and Jacobian determinants
related to the transforms given in the introduction. If this is already familiar, feel free to skip ahead to Analyzing the Normalizing Function.
## The Jacobian Determinant
The Jacobian Determinant is important because of the Theorem of Change of Variables:
$$ \int_A f(\a{x}) \; \vd{\a{x}} =
\int_U f(\a{x}(\a{u})) \jac{\a{x}}{\a{u}} \vd{\a{u}},\,\text{where}\, U=\a{x}^{-1}(A), $$
where $A\subseteq\mathbb{R}^n$ is the integration domain for $\a{x}$, $\vd{\a{x}}$ is the $n$-dimensional Cartesian measure
on $\mathbb{R}^n$, $\a{x}(\a{u})$ is a one-to-one mapping from $\a{u}$-space into $\a{x}$-space, and $\jac{\a{x}}{\a{u}}$
is the Jacobian determinant.
The above is similar to the one-dimensional change of variable (integration by substitution):
$$ \int_{a}^{b} f(x)\, dx = \int_{\phi^{-1}(a)}^{\phi^{-1}(b)} f(\phi(u))\, \phi'(u)\,du,$$
used in Calculus.
In Probability Theory, suppose we have a transformation of random variables $\a{Y} = \a{y}(\a{X})$ with given density $p_\a{X}(\a{x})$,
and with inverse $\a{X} = \a{y}^{-1}(\a{Y}) = \a{x}(\a{Y})$. Then
$$p_\a{Y}(\a{y})\;\vd{\a{y}} = p_\a{X}(\a{x})\;\vd{\a{x}} = p_\a{X}(\a{x})\;\jac{\a{x}}{\a{y}} \vd{\a{y}}, $$
so
$$p_\a{Y}(\a{y}) = p_\a{X}(\a{x}) \; \jac{\a{x}}{\a{y}} = p_\a{X}(\a{x})\; \jac{\a{y}}{\a{x}}^{-1}.$$
Notice that when $p_\a{X}(\a{x}) = 1$,
$$ p_\a{Y}(\a{y}) = \jac{\a{x}}{\a{y}} = \jac{\a{y}}{\a{x}}^{-1},$$
so taking the reciprocal of the Jacobian determinant of a transform based on a normalized uniform distribution gives us the post-transform
probability density function.
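As a one-dimensional example of this: if $u$ is uniform on $[0,\,1]$ and $y = \sqrt{u}$, then $u = y^2$ and $\jac{u}{y} = 2y$, so
$p_Y(y) = 1\cdot 2y = 2y$ on $[0,\,1]$, which is the familiar density of the radius when sampling a disk uniformly
(and the reason $\sqrt{u}$ appears in the cosine-weighted map from the introduction).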
### 2D transformations
Let $x = x(u, v)$, and $y = y(u, v)$, or
$$\a{x} = \a{x}(\a{u}) = \begin{bmatrix}x(u, v)\\y(u, v)\end{bmatrix}.$$
If $u$ changes by $\vd{u}$, then $x$ changes by $(\partial x/\partial u)\,\vd{u}$ and $y$ changes by
$(\partial y/\partial u)\,\vd{u}$, and similarly for $v$. Thus a small rectangle in the $(u, v)$-plane corresponds to
a small parallelogram in the $(x, y)$-plane with sides
$$ \a{a} = \begin{bmatrix}\frac{\partial x}{\partial u} \vd{u} \\ \frac{\partial y}{\partial u} \vd{u} \end{bmatrix} = \begin{bmatrix}\frac{\partial x}{\partial u} \\ \frac{\partial y}{\partial u} \end{bmatrix}\vd{u}
= \frac{\partial \a{x}}{\partial u} \vd{u},$$
and
$$ \a{b} = \begin{bmatrix}\frac{\partial x}{\partial v} \vd{v} \\ \frac{\partial y}{\partial v} \vd{v} \end{bmatrix} = \begin{bmatrix}\frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial v} \end{bmatrix}\vd{v}
= \frac{\partial \a{x}}{\partial v} \vd{v}.$$
The area of the parallelogram is
$$ A = \abs{\mathsf{det}(\a{a}, \a{b})} =
\abs{
\left(\frac{\partial x}{\partial u} \vd{u} \cdot \frac{\partial y}{\partial v} \vd{v}\right) -
\left(\frac{\partial y}{\partial u} \vd{u} \cdot \frac{\partial x}{\partial v} \vd{v}\right)
} =
\abs{
\frac{\partial x}{\partial u} \frac{\partial y}{\partial v} -
\frac{\partial y}{\partial u} \frac{\partial x}{\partial v}
} \cdot \abs{\vd{u}\cdot\vd{v}}.$$
Now, since $\abs{\vd{u}\cdot\vd{v}}$ is the area of the small rectangle in the $(u,v)$-plane, the first factor represents the
magnification factor in the $(x, y)$-plane. We call this the Jacobian determinant, since
$$
J = \abs{\frac{\partial x}{\partial u} \frac{\partial y}{\partial v} -
\frac{\partial y}{\partial u} \frac{\partial x}{\partial v}} =
\abs{\mathsf{det}\begin{bmatrix}
\frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\
\frac{\partial y}{\partial u} & \frac{\partial y}{\partial v}
\end{bmatrix}} = \left\|\begin{matrix}
\frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\
\frac{\partial y}{\partial u} & \frac{\partial y}{\partial v}
\end{matrix}\right\|.$$
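A familiar example is the polar-coordinate map $x = r\cos\theta$, $y = r\sin\theta$, for which
$$ J = \abs{\mathsf{det}\begin{bmatrix}
\cos\theta & -r\sin\theta \\
\sin\theta & \phantom{-}r\cos\theta
\end{bmatrix}} = r\cos^2\theta + r\sin^2\theta = r,$$
recovering the well-known area element $\vd{x}\,\vd{y} = r\;\vd{r}\,\vd{\theta}$.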
### Parameterized Surfaces
Let a surface in 3 dimensions be parameterized by $u$ and $v$, in other words
$$\a{x} = \a{x}(\a{u}) = \begin{bmatrix}x(u, v)\\y(u, v)\\z(u, v)\end{bmatrix},$$
then
$$ \a{a} = \frac{\partial\a{x}}{\partial u} \vd{u} = \begin{bmatrix}\frac{\partial x}{\partial u} \\ \frac{\partial y}{\partial u} \\ \frac{\partial z}{\partial u} \end{bmatrix}\vd{u},\,\text{and}\,
\,\a{b} = \frac{\partial\a{x}}{\partial v} \vd{v} = \begin{bmatrix}\frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial v} \\ \frac{\partial z}{\partial v} \end{bmatrix}\vd{v},$$
and the surface area can be calculated from
$$ A = \l{ \a{a} \times \a{b} } = \l{\frac{\partial \a{x}}{\partial u} \times \frac{\partial \a{x}}{\partial v}} \cdot \abs{\vd{u}\cdot\vd{v}},$$
so the Jacobian determinant for the surface is defined as
$$ J = \jac{\a{x}}{\a{u}} = \l{\frac{\partial \a{x}}{\partial u} \times \frac{\partial \a{x}}{\partial v}}. $$
(Note that this is a slight abuse of notation: it is not equal to $\abs{\frac{\partial(x,y,z)}{\partial(u,v)}}$, since
that would be the determinant of a $3\times 2$ matrix, which is not defined. It should be evident from context which form
of the Jacobian we mean.)
### Notation
The Jacobian determinant has a few different notations, including $J_\mathsf{f}$ to indicate the transform in question, as well as
$$ J = \abs{\frac{\partial(x, y)}{\partial(u, v)}}.$$
We will use the notation
$$ J = \jac{\a{x}}{\a{u}}, $$
since it resembles the prior and because it indicates that it is the *magnification factor* of the $\a{x}$-space relative to the $\a{u}$-space:
$$ \frac{\abs{\vd{A_\a{x}}}}{\abs{\vd{A_\a{u}}}} = J = \jac{\a{x}}{\a{u}}.$$
## Analyzing the Normalizing Function
The Elliptic Stretch uses the normalizing function:
$$ \d{p} = \mathsf{normalize}(\a{p}) = \frac{\a{p}}{\l{\a{p}}}.$$
We want to analyze this transformation. In other words, how do we differentiate it, and what is the Jacobian Determinant of the transformation?
### Differentiation
Assuming $\a{p}$ is a function of some other parameters, how does a small change of the input parameters change the
normalized vector?
Let $\lambda = \l{\a{p}} = (\inner{\a{p}}{\a{p}})^{1/2}$. Then
$$\a{p} = \lambda \d{p}.$$
Using the product rule $(fg)' = f'g + fg'$ in differential notation:
$$ \mathrm{d}\a{p} = \mathrm{d}(\lambda \d{p}) = (\mathrm{d}\lambda) \d{p} + \lambda (\mathrm{d}\d{p}),$$
we get the division rule
$$ \mathrm{d}\d{p} =
\frac{1}{\lambda} \; \left(\mathrm{d}\a{p} - \mathrm{d}\lambda\;\d{p}\right) =
\frac{1}{\lambda} \; \left(\mathrm{d}\a{p} - \mathrm{d}\lambda\;\frac{\a{p}}{\lambda}\right) =
\frac{1}{\lambda^2} \left(\lambda \; \mathrm{d}\a{p} - \mathrm{d}\lambda \; \a{p}\right).$$
Now,
$$ \mathrm{d}\lambda = \mathrm{d}(\l{\a{p}}) = \mathrm{d}\left[\left(\inner{\a{p}}{\a{p}}\right)^{1/2}\right]
= \frac{\mathrm{d}(\inner{\a{p}}{\a{p}})}{2\;\left(\inner{\a{p}}{\a{p}}\right)^{1/2}}
= \frac{2\;\inner{\a{p}}{\mathrm{d}\a{p}}}{2\;\left(\inner{\a{p}}{\a{p}}\right)^{1/2}}
= \frac{1}{\lambda}\;\left(\inner{\a{p}}{\mathrm{d}\a{p}}\right),
$$
which results in
$$ \mathrm{d}\d{p} = \frac{1}{\lambda^3} \left[\lambda^2 \mathrm{d}\a{p} - (\inner{\a{p}}{\mathrm{d}\a{p}})\;\a{p}\right],
$$
or using $\lambda = \l{\a{p}}$ and $\lambda^2 = \inner{\a{p}}{\a{p}}$:
$$ \mathrm{d}\d{p} = \frac{1}{\l{\a{p}}^3} \left[(\inner{\a{p}}{\a{p}})\; \mathrm{d}\a{p} - (\inner{\a{p}}{\mathrm{d}\a{p}})\;\a{p}\right].$$
Now, using the triple product formula $\a{a}\times(\a{b}\times\a{c}) = (\inner{\a{a}}{\a{c}})\a{b} - (\inner{\a{a}}{\a{b}})\a{c}$,
we get
$$ \mathrm{d}\d{p} = \frac{\a{p}\times(\mathrm{d}\a{p}\times\a{p})}{\l{\a{p}}^3}.$$
### Jacobian Determinant
If $\a{p}$ is a parameterized function of two variables, we have a surface $\d{p}(\a{u})$, so
$$ J = \jac{\d{p}}{\a{u}} = \l{\frac{\partial\d{p}}{\partial u} \times \frac{\partial\d{p}}{\partial v}}.$$
Now,
$$\frac{\partial\d{p}}{\partial u} = \d{p}_u = \mathrm{d}_u\d{p} = \frac{\a{p}\times(\mathrm{d}_u\a{p}\times\a{p})}{\l{\a{p}}^3},$$
$$\frac{\partial\d{p}}{\partial v} = \d{p}_v = \mathrm{d}_v\d{p} = \frac{\a{p}\times(\mathrm{d}_v\a{p}\times\a{p})}{\l{\a{p}}^3},$$
so using the identity (see below for the derivation)
$$
\left[\a{a}\times(\a{b}\times \a{a})\right]\times\left[\a{a}\times(\a{c}\times \a{a})\right]
= (\a{a}\bullet\a{a}) (\a{a}\bullet(\a{b}\times\a{c})) \a{a},
$$
we get
$$
\begin{align*}
\d{p}_u\times\d{p}_v &= \frac{1}{\l{\a{p}}^6} \left[\a{p}\times(\a{p}_u\times \a{p})\right]\times\left[\a{p}\times(\a{p}_v\times \a{p})\right]\\
&= \frac{1}{\l{\a{p}}^6} (\a{p}\bullet\a{p}) \left(\a{p}\bullet(\a{p}_u\times\a{p}_v)\right) \a{p},
\end{align*}$$
so, since $\l{\a{p}}^2 = \a{p}\bullet\a{p},$ and $\l{c\a{p}} = \abs{c}\l{\a{p}},$
$$
\l{\d{p}_u\times\d{p}_v} =
\frac{1}{\l{\a{p}}^6}\cdot\l{\a{p}}^2\cdot \left|\a{p}\bullet\left(\a{p}_u\times\a{p}_v\right)\right|\cdot\l{\a{p}} =
\frac{ \left| \det{\a{p}}{\a{p}_u}{\a{p}_v} \right| } {\l{\a{p}}^3},
$$
so we get the *Normalizing Jacobian Determinant formula*:
$$
\boxed{
J_\mathsf{normalize} = \jac{\d{p}}{\a{u}} = \frac{1}{\l{\a{p}}^3}\cdot\left|\a{p}\bullet\left(\a{p}_u\times\a{p}_v\right)\right|.
}
$$
#### Derivation
We use the identity $\left(\a{a}\times\a{b}\right)\times\left(\a{a}\times\a{c}\right) = \left(\a{a}\bullet\left(\a{b}\times\a{c}\right)\right)\a{a}$ twice:
$$\begin{align*}
\left[\a{a}\times(\underline{\a{b}\times \a{a}})\right]\times\left[\a{a}\times(\underline{\a{c}\times \a{a}})\right]
&= \left(\a{a}\bullet\left[\left(\a{b}\times\a{a}\right)\times\left(\a{c}\times\a{a}\right)\right]\right) \a{a} \tag{$\mathsf{identity}$} \\
&= \left(\a{a}\bullet\left[\underline{\left(\a{a}\times\a{b}\right)\times\left(\a{a}\times\a{c}\right)}\right]\right) \a{a} \tag{$\a{p}\times\a{q}=-\a{q}\times\a{p}$} \\
&= \left(\a{a}\bullet\left[\left(\underline{\a{a}\bullet(\a{b}\times\a{c})}\right) \a{a} \right]\right) \a{a} \tag{$\mathsf{identity}$}\\
&= \left(\a{a}\bullet(\a{b}\times\a{c})\right) \left(\a{a}\bullet\a{a}\right) \a{a} \tag{$\a{p}\bullet k\a{q}=k(\a{p}\bullet\a{q})$} \\
\end{align*}$$
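The boxed formula is straightforward to sanity-check numerically. Below is a minimal Python/NumPy sketch (the test surface $\a{p}(u, v)$ and the helper names are arbitrary choices of ours) comparing the formula against a central-difference estimate of $\l{\partial_u\d{p}\times\partial_v\d{p}}$:

```python
import numpy as np

def p(u, v):
    # An arbitrary smooth, non-normalized test surface.
    return np.array([u + 0.3 * v, v * v + 1.0, 1.0 + u * v])

def normalize(x):
    return x / np.linalg.norm(x)

def jac_from_formula(u, v, h=1e-5):
    # |p . (p_u x p_v)| / ||p||^3, with p_u and p_v from central differences.
    pu = (p(u + h, v) - p(u - h, v)) / (2.0 * h)
    pv = (p(u, v + h) - p(u, v - h)) / (2.0 * h)
    q = p(u, v)
    return abs(np.dot(q, np.cross(pu, pv))) / np.linalg.norm(q) ** 3

def jac_direct(u, v, h=1e-5):
    # ||d(normalize p)/du x d(normalize p)/dv|| from central differences.
    nu = (normalize(p(u + h, v)) - normalize(p(u - h, v))) / (2.0 * h)
    nv = (normalize(p(u, v + h)) - normalize(p(u, v - h))) / (2.0 * h)
    return np.linalg.norm(np.cross(nu, nv))

print(jac_from_formula(0.3, 0.7), jac_direct(0.3, 0.7))  # the two values should agree
```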
# Proof
As mentioned in the introduction, we are going to prove that sampling from a cosine-weighted hemisphere
followed by the elliptic stretch results in the GGX normal-distribution function.
## Cosine-Weighted Hemisphere
Define the map from $\a{u} = (u, v)\in\left[0,\,1\right]\times\left[0,\,1\right] \subset \mathbb{R}^2$ to the hemisphere by:
$$ \d{s} = \d{s}(u, v) = \begin{bmatrix}\hat{s}_x\\ \hat{s}_y \\ \hat{s}_z\end{bmatrix}
= \begin{bmatrix}\sqrt{u}\cdot\cos(2\pi v)\\ \sqrt{u}\cdot\sin(2\pi v)\\ \sqrt{1-u}\end{bmatrix}.
$$
Note that $\d{s}\in\mathbb{S}^2$, since
$$\l{\d{s}}^2 =
\left(\sqrt{u}\cdot\cos(2\pi v)\right)^2 +
\left(\sqrt{u}\cdot\sin(2\pi v)\right)^2 +
\left(\sqrt{1-u}\right)^2
= u + \left(1-u\right) = 1,$$
and
$$ \partial_u\d{s} = \frac{1}{2}\cdot\begin{bmatrix}\frac{1}{\sqrt{u}}\cos(2\pi v)\\\frac{1}{\sqrt{u}}\sin(2\pi v)\\-\frac{1}{\sqrt{1-u}}\end{bmatrix},\quad
\partial_v\d{s} = 2\pi\cdot\begin{bmatrix}-\sqrt{u}\sin(2\pi v)\\\sqrt{u}\cos(2\pi v)\\0\end{bmatrix},$$
so taking the cross-product
$$ \partial_u\d{s}\times\partial_v\d{s} = \frac{\pi}{\sqrt{1-u}} \cdot \begin{bmatrix}
\sqrt{u}\cos(2\pi v)\\
\sqrt{u}\sin(2\pi v)\\
\sqrt{1-u}
\end{bmatrix}
= \frac{\pi}{\hat{s}_z} \d{s}.$$
This shows that the Jacobian determinant
$$\jac{\d{s}}{\a{u}} = \l{\partial_u\d{s}\times\partial_v\d{s}} = \frac{\pi}{|\hat{s}_z|} \cdot \l{\d{s}} = \frac{\pi}{|\hat{s}_z|} = \frac{\pi}{|\cos\theta|}.$$
In other words, if we sample the unit-square uniformly, then $\d{s}(\a{u})$ is a cosine-weighted sampling method with PDF
$$p(\d{s}) = \frac{1}{\pi}\cdot \abs{\cos\theta} = \frac{1}{\pi}\cdot\abs{\inner{\d{s}}{\d{n}}}.$$
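A quick numerical spot check of the cross product and the resulting Jacobian above (a Python/NumPy sketch; the helper name and the test point are ours):

```python
import numpy as np

def s_hat(u, v):
    # The cosine-weighted hemisphere map from above.
    phi = 2.0 * np.pi * v
    return np.array([np.sqrt(u) * np.cos(phi), np.sqrt(u) * np.sin(phi), np.sqrt(1.0 - u)])

u, v, h = 0.4, 0.15, 1e-6
su = (s_hat(u + h, v) - s_hat(u - h, v)) / (2.0 * h)   # finite-difference partials
sv = (s_hat(u, v + h) - s_hat(u, v - h)) / (2.0 * h)
s = s_hat(u, v)
print(np.cross(su, sv), (np.pi / s[2]) * s)             # these two vectors should match
print(np.linalg.norm(np.cross(su, sv)), np.pi / s[2])   # and so should the Jacobians
```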
## Elliptic Stretch
Let the elliptic stretch be
$$\d{m} = \mathsf{normalize}\left(\v{m}\right) = \frac{1}{\l{\v{m}}}\cdot\v{m},\quad\text{where}\,\v{m} = \begin{bmatrix}\alpha_x\cdot \hat{s}_x\\\alpha_y\cdot \hat{s}_y\\\hat{s}_z\end{bmatrix},$$
and, as before, $\d{s}$ is the cosine-weighted sample with Jacobian determinant
$$\jac{\d{s}}{\a{u}} = \frac{\pi}{|\hat{s}_z|}.$$
Then, since this is a normalizing transformation, the Jacobian determinant
$$
\jac{\d{m}}{\a{u}} = \frac{\det{\v{m}}{\v{m}_u}{\v{m}_v}}{\l{\v{m}}^3}
=\frac{\left|\begin{matrix}\alpha_x\cdot\hat{s}_x & \alpha_y\cdot\hat{s}_y & \hat{s}_z \\
\alpha_x\cdot\partial_u\hat{s}_x & \alpha_y\cdot\partial_u\hat{s}_y & \partial_u\hat{s}_z \\
\alpha_x\cdot\partial_v\hat{s}_x & \alpha_y\cdot\partial_v\hat{s}_y & \partial_v\hat{s}_z
\end{matrix}\right|}{\l{\v{m}}^3}
= \frac{\alpha_x\alpha_y\det{\d{s}}{\d{s}_u}{\d{s}_v}}{\l{\v{m}}^3}
$$
and since $\d{s}\in\mathbb{S}^2$, and $\d{s}_u\times\d{s}_v = \frac{\pi}{|\hat{s}_z|} \d{s}$,
$$
\det{\d{s}}{\d{s}_u}{\d{s}_v} =
\d{s}\bullet(\d{s}_u\times\d{s}_v) =
\frac{\pi}{|\hat{s}_z|} \left(\d{s}\bullet\d{s}\right) =
\frac{\pi}{|\hat{s}_z|},
$$
so
$$ \jac{\d{m}}{\a{u}} = \frac{\pi\cdot\alpha_x\alpha_y}{|\hat{s}_z|\cdot \l{\v{m}}^3}. $$
The above is expressed in $\v{m}$-space, so to bring it to the normalized $\d{m}$-space, note that
$$\hat{m}_x = \alpha_x \cdot \frac{\hat{s}_x}{\l{\v{m}}},\quad
\hat{m}_y = \alpha_y \cdot \frac{\hat{s}_y}{\l{\v{m}}},\quad
\hat{m}_z = \frac{\hat{s}_z}{\l{\v{m}}},$$
and
$$
\frac{\hat{m}_x^2}{\alpha_x^2} + \frac{\hat{m}_y^2}{\alpha_y^2} + \hat{m}_z^2 =
\frac{\hat{s}_x^2+\hat{s}_y^2+\hat{s}_z^2}{\l{\v{m}}^2} = \frac{1}{\l{\v{m}}^2},
$$
so, noting that $\abs{\hat{s}_z} = \abs{\hat{m}_z}\cdot\l{\v{m}}$,
$$\jac{\d{m}}{\a{u}} = \frac{\pi\alpha_x\alpha_y}{|\hat{m}_z|\cdot\l{\v{m}}^4} =
\frac{\pi\alpha_x\alpha_y}{|\hat{m}_z|}\cdot \left(
\frac{\hat{m}_x^2}{\alpha_x^2} +
\frac{\hat{m}_y^2}{\alpha_y^2} +
\hat{m}_z^2\right)^2 = \frac{1}{|\hat{m}_z| \cdot D(\d{m})},
$$
where $D(\d{m})$ is the GGX normal-distribution function
$$
D(\d{m}) = \frac{1}{\pi\cdot\alpha_x\alpha_y\left(
\frac{\hat{m}_x^2}{\alpha_x^2}+
\frac{\hat{m}_y^2}{\alpha_y^2}+
\hat{m}_z^2
\right)^2},
$$
and hence, by taking the reciprocal,
$$ \jac{\d{m}}{\a{u}}^{-1} = D(\d{m})\cdot(\inner{\d{m}}{\d{n}}).$$
In other words,
$$ p(\d{m}) = p_U(\a{u})\cdot\jac{\d{m}}{\a{u}}^{-1} = D(\d{m})\cdot(\inner{\d{m}}{\d{n}}),$$
since the PDF on the unit square is $p_U(\a{u}) = 1.$
This concludes the proof.
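As an independent numerical confirmation (in the spirit of the numerical analysis mentioned in the introduction), here is a small Python/NumPy sketch; the helper names, roughness values, and test point are arbitrary choices of ours:

```python
import numpy as np

def m_hat(u, v, ax=0.15, ay=0.5):
    # Cosine-weighted hemisphere sample followed by the elliptic stretch.
    phi = 2.0 * np.pi * v
    s = np.array([np.sqrt(u) * np.cos(phi), np.sqrt(u) * np.sin(phi), np.sqrt(1.0 - u)])
    m = np.array([ax * s[0], ay * s[1], s[2]])
    return m / np.linalg.norm(m)

def ggx_D(m, ax=0.15, ay=0.5):
    t = (m[0] / ax) ** 2 + (m[1] / ay) ** 2 + m[2] ** 2
    return 1.0 / (np.pi * ax * ay * t * t)

u, v, h = 0.37, 0.81, 1e-6
mu = (m_hat(u + h, v) - m_hat(u - h, v)) / (2.0 * h)    # finite-difference partials
mv = (m_hat(u, v + h) - m_hat(u, v - h)) / (2.0 * h)
m = m_hat(u, v)
pdf_from_jacobian = 1.0 / np.linalg.norm(np.cross(mu, mv))
pdf_from_formula = ggx_D(m) * m[2]                      # D(m) * (m . n), with n = +z
print(pdf_from_jacobian, pdf_from_formula)              # these should agree
```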
# Other Results
We finally note some facts about this result.
## Relation to slope-space
The slope-space is defined by the map
$$ \s{m} = \begin{bmatrix}\tilde{m}_x\\\tilde{m}_y\\1\end{bmatrix} = \mathsf{slope}(\d{m}) = \frac{1}{\hat{m}_z} \begin{bmatrix} \hat{m}_x \\ \hat{m}_y \\ \hat{m}_z \end{bmatrix}. $$
Inversely,
$$ \d{m} = \mathsf{slope}^{-1}(\s{m}) = \mathsf{normalize}\begin{bmatrix}\tilde{m}_x\\\tilde{m}_y\\1\end{bmatrix} = \frac{1}{\sqrt{\tilde{m}_x^2 + \tilde{m}_y^2 + 1}} \; \begin{bmatrix} \tilde{m}_x \\ \tilde{m}_y \\ 1 \end{bmatrix}. $$
Notice that
$$ \hat{m}_z = \frac{1}{\sqrt{\tilde{m}_x^2+\tilde{m}_y^2+1}} = \l{\s{m}}^{-1},$$
so, by using the Normalizing Jacobian Determinant formula:
$$\jac{\d{m}}{\s{m}} = \frac{1}{\l{\s{m}}^3} \cdot \abs{\s{m}\bullet\left(\frac{\partial\s{m}}{\partial\tilde{m}_x}\times\frac{\partial\s{m}}{\partial\tilde{m}_y}\right)} = \l{\s{m}}^{-3} = \hat{m}_z^3.$$
That is,
$$ J_{\mathsf{slope}^{-1}} = \frac{1}{J_\mathsf{slope}} =
\frac{1}{\left(\tilde{m}_x^2 + \tilde{m}_y^2 + 1\right)^{3/2}} = \hat{m}_z^3. $$
A stretch in slope-space corresponds to the elliptic stretch:
$$\mathsf{normalize}\begin{bmatrix}\alpha_x\;\tilde{m}_x\\\alpha_y\;\tilde{m}_y\\1\end{bmatrix} =
\mathsf{normalize}\begin{bmatrix}\alpha_x\;\frac{\hat{m}_x}{\hat{m}_z}\\
\alpha_y\; \frac{\hat{m}_y}{\hat{m}_z}\\1\end{bmatrix} =
\mathsf{normalize}\begin{bmatrix}\alpha_x\;\hat{m}_x\\
\alpha_y\; \hat{m}_y\\ \hat{m}_z\end{bmatrix}.$$
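This equivalence is easy to verify numerically; a tiny Python/NumPy sketch (the test normal and roughness values are arbitrary choices of ours):

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x)

m = normalize(np.array([0.2, -0.4, 0.8]))   # an arbitrary unit normal with positive z
ax, ay = 0.15, 0.5
via_slope  = normalize(np.array([ax * m[0] / m[2], ay * m[1] / m[2], 1.0]))
via_direct = normalize(np.array([ax * m[0], ay * m[1], m[2]]))
print(via_slope, via_direct)                # the two results are identical
```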
We can write $D$ in slope-space by direct substitution as
$$ D(\s{m}) = \frac{\left(\tilde{m}_x^2+\tilde{m}_y^2+1\right)^2}
{\pi\cdot\alpha_x\alpha_y\left(
\frac{\tilde{m}_x^2}{\alpha_x^2} +
\frac{\tilde{m}_y^2}{\alpha_y^2} +
1
\right)^2
}.$$
By using $\cos\theta_m = \hat{m}_z = 1/\sqrt{\tilde{m}_x^2+\tilde{m}_y^2+1}$:
$$ D(\s{m})
=
\frac{1}
{\pi\cdot\alpha_x\alpha_y\left(
\frac{\tilde{m}_x^2}{\alpha_x^2} +
\frac{\tilde{m}_y^2}{\alpha_y^2} +
1
\right)^2
}\cdot\frac{1}{\hat{m}_z^4}
= \frac{P^{22}(\s{m})}{\hat{m}_z^4},$$
where the first factor $P^{22}(\s{m})$ is the distribution of micro-surface slopes.
This aligns nicely with the Theorem of Change of Variables:
$$ P^{22}(\s{m}) = p(\d{m}(\s{m}))\;\jac{\d{m}}{\s{m}} = D(\d{m}(\s{m}))\;\hat{m}_z\cdot\hat{m}_z^3 = D(\s{m})\;\hat{m}_z^4.$$
## Isotropic roughness
For isotropic roughness, $\alpha_x = \alpha_y = \alpha$, and we have
$$ D(\d{m}) = \frac{1}{\pi\cdot\alpha^2\left(\frac{\hat{m}_x^2+\hat{m}_y^2}{\alpha^2}+\hat{m}_z^2\right)^2}
= \frac{\alpha^2}{\pi\cdot\alpha^4\left(\frac{1-\hat{m}_z^2}{\alpha^2}+\hat{m}_z^2\right)^2}
= \frac{\alpha^2}{\pi\left(1-\hat{m}_z^2+\alpha^2\hat{m}_z^2\right)^2},$$
or
$$ D(\d{m}) = \frac{\alpha^2}{\pi\left(\cos^2\theta_m(\alpha^2-1)+1\right)^2}.$$
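This isotropic form is the one most commonly seen in real-time code; a minimal Python sketch (the function name is our own):

```python
import numpy as np

def ggx_D_isotropic(cos_theta_m, alpha):
    # D(m) = alpha^2 / (pi * (cos^2(theta_m) * (alpha^2 - 1) + 1)^2)
    a2 = alpha * alpha
    denom = cos_theta_m * cos_theta_m * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)
```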
## GGX composition
While the GGX distribution arises as the result of Cosine Sampling followed by the Elliptic Stretch, we
should not think that the Elliptic Stretch alone is sufficient. If we look at
$$ \jac{\d{m}}{\a{u}} = \jac{\d{m}}{\d{s}}\cdot\jac{\d{s}}{\a{u}}, $$
then the Jacobian for the Elliptic Stretch is
$$ \jac{\d{m}}{\d{s}} = \jac{\d{m}}{\a{u}}\cdot\jac{\d{s}}{\a{u}}^{-1} =
\frac{\pi\alpha_x\alpha_y}{\abs{\hat{s}_z}\cdot\l{\v{m}}^3}\cdot \frac{\abs{\hat{s}_z}}{\pi} =
\frac{\alpha_x\alpha_y}{\l{\v{m}}^3}. $$
Since $\v{m}$ is a function of $\d{s}$, we get
$$ \jac{\d{m}}{\d{s}} =
\frac{\alpha_x\alpha_y}{\l{\v{m}}^3} = \frac{\alpha_x\alpha_y}{\left(\alpha_x^2\hat{s}_x^2+\alpha_y^2\hat{s}_y^2+\hat{s}_z^2\right)^{3/2}}.$$
We can also express it in post-scaled $\d{m}$-space, using
$$ \l{\v{m}} = \frac{1}{\sqrt{\frac{\hat{m}_x^2}{\alpha_x^2} + \frac{\hat{m}_y^2}{\alpha_y^2} + \hat{m}_z^2}},$$
or
$$ \jac{\d{m}}{\d{s}} = \alpha_x\alpha_y\left(\frac{\hat{m}_x^2}{\alpha_x^2}+\frac{\hat{m}_y^2}{\alpha_y^2} + \hat{m}_z^2\right)^{3/2}.$$
As can be seen, this is *not* simply the (reciprocal of the) GGX normal-distribution. The closest we can get in terms of the normal-distribution $D$ is:
$$
\jac{\d{m}}{\d{s}} =
\frac{1}{
D(\d{m})\cdot\pi\cdot\sqrt{
\frac{\hat{m}_x^2}{\alpha_x^2}+\frac{\hat{m}_y^2}{\alpha_y^2} + \hat{m}_z^2
}
}
= \frac{\l{\v{m}}}{\pi\cdot D(\d{m})}.$$
In other words, we have to be careful when composing the GGX normal-distribution with anything other than the usual Cosine-Weighted sampling around the
geometry normal. If we wanted to, say, cosine-weight towards a different direction $\d{\omega}$, then the Jacobian would be
$$ \jac{\d{m}}{\a{u}} = J_\mathsf{stretch} \cdot J_{\mathsf{cosine-\omega}}
= \frac{\l{\v{m}}}{\pi\cdot D(\d{m})}\cdot \frac{\pi}{\abs{\inner{\d{m}}{\d{\omega}}}},
$$
so the PDF would be
$$ p(\d{m}) = \frac{\abs{\inner{\d{m}}{\d{\omega}}}D(\d{m})}{\l{\v{m}}},$$
though obviously care has to be taken to make sure the integration domains match.
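Finally, a small numerical cross-check (Python/NumPy, with arbitrary test values of ours) that the two expressions for $\jac{\d{m}}{\d{s}}$ above, one in terms of $\d{s}$ and one in terms of $\d{m}$, agree:

```python
import numpy as np

ax, ay = 0.15, 0.5
s = np.array([0.3, 0.5, np.sqrt(1.0 - 0.3**2 - 0.5**2)])   # a unit vector on the hemisphere
m = np.array([ax * s[0], ay * s[1], s[2]])
m_hat = m / np.linalg.norm(m)

jac_in_s = ax * ay / (ax**2 * s[0]**2 + ay**2 * s[1]**2 + s[2]**2) ** 1.5
jac_in_m = ax * ay * ((m_hat[0] / ax)**2 + (m_hat[1] / ay)**2 + m_hat[2]**2) ** 1.5
print(jac_in_s, jac_in_m)   # both evaluate the same stretch Jacobian
```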
## Visualization
Below we show how a $10 \times 10$ uniform square grid in the $\a{u}$-plane maps to the hemisphere under $\d{m}(\a{u})$, illustrating the density $p(\d{m}(\a{u}))$.
![Figure [ggx-top]: Equal area GGX ($\alpha_x = 0.15, \alpha_y = 0.5$)](ggx-top.svg) ![Figure [ggx-side]: GGX equal area from the side](ggx-side.svg)
![Figure [d-top]: PDF from top](ggx-d-top.png) ![Figure [d-side]: PDF from side](ggx-d-side.png)
# References
[#TrReitz75]: Trowbridge, T. S., and Reitz, K. P. 1975. "Average irregularity representation of a rough surface for ray reflection." J. Opt. Soc. Am. 65, 5 (May), 531–536.
[#Walter14]: Walter, B., et al. 2014. "The Ellipsoid Normal Distribution Function." (November).
[#Heitz17]: Heitz, E. 2017. "A Simpler and Exact Sampling Routine for the GGX Distribution of Visible Normals." Research Report, Unity Technologies. hal-01509746.