This is the fourth note in a series of notes focused on zeros of Dirichlet series, and in particular on Dirichlet series not in the Selberg class. I will refer to the first, second, and third earlier notes in this series. $\DeclareMathOperator{\Re}{Re}$ $\DeclareMathOperator{\Im}{Im}$
Recall that we study Dirichlet series in the extended Selberg class $\widetilde{S}$, which we write as \begin{equation*} L(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}. \end{equation*} Each such Dirichlet series $L$ is assumed to be nontrivial, satisfies a bound of Ramanujan–Petersson type on average, has analytic continuation to an entire function of finite order, and satisfies a functional equation of the shape $s \mapsto 1 - s$, \begin{equation*} \Lambda(s) := L(s) Q^s \prod_{\nu = 1}^N \Gamma(\alpha_\nu s + \beta_\nu) = \omega \overline{\Lambda(1 - \overline{s})}. \end{equation*} It will be convenient to let $\Delta(s) = \prod \Gamma(\alpha_\nu s + \beta_\nu)$ refer to the collected gamma factors. We define the degree of $L(s)$ to be^{1} ^{1}with typical $L$-functions, this counts the number of $\Gamma_\mathbb{R}$ factors in the functional equation (or twice the number of $\Gamma_\mathbb{C}$ factors).
\begin{equation*} d_L = 2 \sum_{\nu = 1}^N \alpha_\nu. \end{equation*} In principle this note is for general $d_L$, but the primary theorem is for $d_L = 2$, applying for example to Dirichlet series associated to $\mathrm{GL}(2)$ type modular forms.
And recall that we do not assume that $L$ has an Euler product.
Counting $c$-values
In this note, we will study solutions to the equation $L(s) = c$ for various $c$. We call roots of the equation $L(s) = c$ the $c$-values of $L$, and we'll denote a generic $c$-value by $\rho_c = \beta_c + i \gamma_c$. If $c = 0$, it is very common to omit the $c$ from this notation, and denote the $0$-values (or zeros) as $\rho = \beta + i \gamma$. (It is also common to write $0$-values as $\rho = \sigma + it$.)
The Riemann Hypothesis predicts that all nontrivial zeros of $\zeta(s)$ lie on the line $\Re s = \tfrac{1}{2}$. Levinson^{2} ^{2}Norman Levinson. Almost all roots of $\zeta(s) = a$ are arbitrarily close to $\sigma = 1/2$. Proceedings of the National Academy of Sciences, 1975. shows further that all but $O(N(T)(\log \log T)^{-1})$ of the roots to $\zeta(s) = c$ in $T < \Im s < 2T$ lie within \begin{equation*} \lvert \Re s - \tfrac{1}{2} \rvert < \frac{(\log \log T)^2}{\log T}. \end{equation*}
Morally, everything of interest occurs right near the critical line.
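To get a concrete feel for $c$-values, here is a short mpmath-based sketch that finds a root of $\zeta(s) = c$ near the first zero of $\zeta$. The value $c = 0.1$ and the starting point are arbitrary choices for illustration; in line with Levinson's theorem, the root found typically has real part close to $\tfrac{1}{2}$.

```python
# Numerically locating a c-value of zeta with mpmath.  The value
# c = 0.1 and the starting point (near the first zero of zeta, at
# roughly 1/2 + 14.1347i) are arbitrary illustrative choices.
from mpmath import mp, mpc, zeta, findroot

mp.dps = 25
c = mpc("0.1")
s0 = mpc("0.5", "14.1")

# Solve zeta(s) = c by root-finding on zeta(s) - c.
rho_c = findroot(lambda s: zeta(s) - c, s0)
print(rho_c, abs(zeta(rho_c) - c))
```

The displacement of this $c$-value from the nearby zero $\rho_1$ is roughly $c / \zeta'(\rho_1)$, so moderate values of $c$ produce roots only a short distance off the critical line.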
The primary theorem of this note codifies a similar statement for zeros of Dirichlet series $L$ with degree $d_L = 2$.
Let $L(s) \in \widetilde{S}$ be a Dirichlet series in the extended Selberg class with $a(1) = 1$ and $d_L = 2$. Then for any $\epsilon > 0$, $L(s)$ has $O_\epsilon(T)$ zeros $\rho = \sigma + it$ with $\lvert \sigma - \tfrac{1}{2} \rvert > \epsilon$. Hence asymptotically, one hundred percent of the zeros of height up to $T$ lie within $\epsilon$ of the critical line.
The assumption that $a(1) = 1$ here isn't necessary to count $0$-values. With suitable adjustments, the same proofs apply to $\ell(s) := L(s) \frac{m^s}{a(m)}$, where $m$ is the index of the first nonvanishing coefficient, after noting that zeros of $\ell(s)$ are the same as zeros of $L(s)$. For generic $c$-values, studying $\ell(s)$ doesn't suffice.
As is typical in this sort of proof, we will appeal to a result frequently called Littlewood's Lemma.
Suppose $a < b$ and that $f(s)$ is an analytic function on $\mathcal{R} = \{ s \in \mathbb{C} : a \leq \sigma \leq b, \lvert t \rvert \leq T\}$, where we write $s = \sigma + it$. Suppose that $f$ does not vanish on the right edge $\sigma = b$ of $\mathcal{R}$. Let $\mathcal{R}'$ be $\mathcal{R}$ minus the union of the horizontal cuts from zeros of $f$ in $\mathcal{R}$ to the left edge of $\mathcal{R}$. We fix a single-valued branch of $\log f(s)$ in the interior of $\mathcal{R}'$. Denote by $\nu(\sigma, T)$ the number of zeros $\rho = \beta + i \gamma$ of $f(s)$ inside the rectangle with $\beta > \sigma$, including zeros with $\gamma = T$ but not those with $\gamma = -T$. Then \begin{equation*} \int_{\partial \mathcal{R}} \log f(s) \, ds = - 2 \pi i \int_{a}^b \nu(\sigma, T) \, d\sigma. \end{equation*}
A proof can be found in Titchmarsh's book^{3} ^{3}Edward Charles Titchmarsh and D. R. Heath-Brown. The theory of the Riemann zeta function. 1986. on $\zeta$. (Many, many things can be found in that book). We give a very abbreviated proof sketch here. Cauchy's Theorem implies that \begin{equation*} \int_{\partial \mathcal{R}'} \log f(s) \, ds = 0 \end{equation*} as $\log f$ is analytic in this domain. Thus the integral over $\partial \mathcal{R}$ equals minus the sum of the integrals along the two sides of each cut. The function $\log f(s)$ jumps by $2\pi i$ (or possibly a multiple of this, depending on whether the zeros are simple or if multiple zeros have the same height — the general proof covers this, but for ease let's suppose this doesn't happen) across these cuts. Then $\int_{\partial \mathcal{R}}$ is $-2 \pi i$ times the total length of the cuts, which is the RHS.
Let $L \in \widetilde{S}$ with $a(1) = 1$. Fix $c \neq 1$. Then for any $b > \max\{ \tfrac{1}{2}, 1 - \tfrac{1}{d_L} \}$, we have that \begin{equation*} \sum_{\substack{\beta_c > b \\ T < \gamma_c \leq 2T}} (\beta_c - b) \ll T. \end{equation*} Here the sum is over $c$-values $\beta_c + i \gamma_c$.
We exclude $c = 1$ because $L(s) \to 1$ as $\sigma \to \infty$, which makes it more complicated to isolate $1$-values.
As $L(s) \to 1$ as $\sigma \to \infty$, there exists $A = A(c) > 0$ such that $\beta_c < A$ for every $c$-value $\rho_c = \beta_c + i \gamma_c$. Define \begin{equation*} \ell(s) = \frac{L(s) - c}{1 - c}. \end{equation*} Clearly zeros of $\ell(s)$ correspond to $c$-values of $L(s)$, and it suffices to count zeros of $\ell(s)$. Let $\nu(\sigma, T)$ denote the number of zeros $\rho_c$ of $\ell(s)$ with $\beta_c > \sigma$ and $T < \gamma_c \leq 2T$ (counting multiplicities).
Choose $a > \max\{A + 2, b\}$ (though we might choose it larger later), and define $\mathcal{R}$ to be the rectangle with vertices $a + iT, a + 2iT, b+2iT, b + iT$. Applying Littlewood's Lemma to $\ell(s)$ over $\mathcal{R}$ gives \begin{equation*} \int_{\partial \mathcal{R}} \log \ell(s) \, ds = - 2 \pi i \int_b^a \nu(\sigma, T) \, d\sigma. \end{equation*} We use $\log(z)$ to agree with the principal branch of the logarithm in a neighborhood of the bottom-right corner of the rectangle, around $a + iT$, and choose values for other points by continuous variation along line segments.^{4} ^{4}The branch doesn't matter, but this simplifies analysis of changes in argument later. Specifically, this assumption implies that $\arg \ell(\sigma + iT)$ and $\arg\ell(\sigma + 2iT)$ are both approximately $0$ near $\sigma = a$. We will choose $a$ sufficiently large that $\Re(\ell(a + iT)) > 1/2$, so there is no problem choosing the principal branch.
The RHS is clear. We compute \begin{equation*} \int_b^a \nu(\sigma, T) \, d \sigma = \sum_{\substack{\beta_c > b \\ T < \gamma_c \leq 2T}} \int_b^{\beta_c} d\sigma = \sum_{\substack{\beta_c > b \\ T < \gamma_c \leq 2T}} (\beta_c - b) \end{equation*} and note this is real-valued. After multiplying by $-2 \pi i$, it becomes purely imaginary, and we can isolate the imaginary part of the integral over $\partial \mathcal{R}$. Thus we have that \begin{align*} 2 \pi \sum_{\substack{\beta_c > b \\ T < \gamma_c \leq 2T}} (\beta_c - b) &= \int_T^{2T} \log \lvert \ell(b + it) \rvert \, dt - \int_T^{2T} \log \lvert \ell(a + it) \rvert \, dt \\ &\quad + \int_b^a \arg \ell(\sigma + iT) \, d\sigma - \int_b^a \arg \ell(\sigma + 2iT) \, d\sigma. \end{align*} Let's denote these four integrals by $I_1, I_2, I_3, I_4$, in order.
Expanding the definition of $\ell(s)$, we see that \begin{equation*} I_1 = \int_T^{2T} \log \lvert L(b + it) - c \rvert \, dt - T \log \lvert 1 - c \rvert. \end{equation*} Jensen's Inequality (the concave version in Theorem 3 of the first note in this series) implies the bound \begin{equation*} \int_T^{2T} \log \lvert L(b + it) - c \rvert \, dt = \frac{1}{2} \int_T^{2T} \log \lvert L(b + it) - c \rvert^2 \, dt \leq \frac{T}{2} \log \Big( \frac{1}{T} \int_T^{2T} \lvert L(b + it) - c \rvert^2 \, dt \Big). \end{equation*} The inner integral is bounded above by $O(T)$ by the Lindelöf-on-average result from Corollary 4 of the third note in this series (this is where we use the assumption that $b > \max\{\frac{1}{2}, 1 - \frac{1}{d_L}\}$), so the entire right-hand side is $\ll T$. Adding in the remaining term $T \log \lvert 1 - c \rvert = O(T)$, we find that \begin{equation*} I_1 \ll T. \end{equation*}
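The concave form of Jensen's inequality used here can be sanity-checked numerically. The sketch below uses $\zeta$ as a stand-in for $L$; the parameters $b = 3/4$, $c = 0.3$, and $T = 30$ are arbitrary illustrative choices.

```python
# Numerical check of the concave Jensen inequality
#   (1/T) ∫_T^{2T} log|F(t)| dt  <=  (1/2) log( (1/T) ∫_T^{2T} |F(t)|^2 dt ),
# here with F(t) = zeta(b + it) - c.  The parameters b = 3/4, c = 0.3,
# and T = 30 are arbitrary choices for illustration.
from mpmath import mp, mpf, mpc, zeta, quad, log

mp.dps = 15
b, c, T = mpf("0.75"), mpc("0.3"), mpf(30)

F = lambda t: zeta(mpc(b, t)) - c
lhs = quad(lambda t: log(abs(F(t))), [T, 2 * T]) / T
rhs = log(quad(lambda t: abs(F(t)) ** 2, [T, 2 * T]) / T) / 2
print(lhs, rhs)
```

The gap between the two sides reflects the strict concavity of $\log$; equality would require $\lvert F \rvert$ to be essentially constant along the segment.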
We now consider $I_2$, the second vertical integral. Morally, for large $a$ (and noting that choosing $a$ larger does not affect the number of $c$values), $\ell(a + it) \approx 1$. Thus $\log \lvert \ell(a + it) \rvert \approx 0$, and we should expect $I_2$ to be negligible in size.
We can prove this by directly expanding the logarithm in its Taylor series. As $a > 1$, we have that \begin{equation}\label{eq:ell_a_small} \ell(a + it) = \frac{L(a + it) - c}{1 - c} = \frac{1 - c}{1 - c} + \frac{1}{1-c} \sum_{n \geq 2} \frac{a(n)}{n^{a + it}} = 1 + \frac{1}{1-c} \sum_{n \geq 2} \frac{a(n)}{n^{a + it}}. \end{equation} For $a$ sufficiently large,^{5} ^{5}possibly depending on $c$, but this is okay the absolute value of the second term can be bounded above by $1/2$, say. Expanding the logarithm gives \begin{equation*} \log \lvert \ell(a + it) \rvert = \Re \sum_{k \geq 1} \frac{(-1)^{k+1}}{k(1-c)^k} \sum_{n_1 = 2}^\infty \cdots \sum_{n_k = 2}^\infty \frac{a(n_1) \cdots a(n_k)}{(n_1 \cdots n_k)^{a + it}}, \end{equation*} implying that \begin{align*} I_2 &= \Re \sum_{k \geq 1} \frac{(-1)^{k+1}}{k(1-c)^k} \sum_{n_1 = 2}^\infty \cdots \sum_{n_k = 2}^\infty \frac{a(n_1) \cdots a(n_k)}{(n_1 \cdots n_k)^{a}} \int_T^{2T} \frac{dt}{(n_1 \cdots n_k)^{it}} \\ &\ll \sum_{k \geq 1} \frac{1}{k} \Big( \sum_{n \geq 2} \frac{1}{n^{a - 2 - \epsilon}} \Big)^k \ll 1 \end{align*} for sufficiently large $a$. Note that we've used the trivial bound $\lvert a(n)\rvert \ll n$ here, coming from the Ramanujan–Petersson bound on average. Thus $I_2 \ll 1$ for $a$ sufficiently large.
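This smallness is easy to see numerically. The sketch below takes $L = \zeta$; the choices $a = 10$, $c = 0.5$, and the sample heights are arbitrary and purely illustrative.

```python
# Far to the right, ell(a + it) = (zeta(a + it) - c)/(1 - c) is within
# about 2^{-a}/|1 - c| of 1, so log|ell(a + it)| is already tiny.
# The choices a = 10, c = 0.5, and the sample heights are arbitrary.
from mpmath import mp, mpc, zeta, log

mp.dps = 15
a, c = 10, mpc("0.5")
ell = lambda s: (zeta(s) - c) / (1 - c)

vals = [abs(log(abs(ell(mpc(a, t))))) for t in range(100, 200, 20)]
print(max(vals))
```

Since $\lvert \zeta(10 + it) - 1 \rvert \leq \zeta(10) - 1 \approx 10^{-3}$ uniformly in $t$, each sampled value of $\log\lvert\ell\rvert$ is of size roughly $2 \cdot 10^{-3}$ at most.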
We now estimate the two horizontal integrals $I_3$ and $I_4$. Identical techniques apply to both. Recall that \begin{equation*} I_3 = \int_b^a \arg \ell(\sigma + iT) d\sigma. \end{equation*} If $\Re\, \ell(\sigma + iT)$ has $k$ zeros with $b \leq \sigma \leq a$, then we can partition $[b, a]$ into $k + 1$ subintervals on which $\Re\, \ell(\sigma + iT)$ is of constant sign. Note that the argument cannot change by more than $\pi$ on each subinterval, and thus the net change in argument^{6} ^{6}and thus essentially the maximum value of the argument within the integral, as the argument is $\approx 0$ at the right endpoint of the integral by our choice of branch of log. is bounded by $(k+1)\pi$.
We now estimate the number $k$ of zeros on the horizontal line segment. To do this, consider the function \begin{equation*} g(z) = \frac{1}{2} \Big( \ell(z + iT) + \overline{\ell(\overline{z} + iT)} \Big). \end{equation*} Then $g(\sigma) = \Re \ell(\sigma + iT)$, and the number of zeros of $g$ on the interval $[b, a]$ is the same as the number $k$. Note that $g$ is an integral function of order $1$ since $\ell$ is, and the completely general approach of bounding the number of zeros of integral functions of finite order applies, showing that $k \ll \log T$. For completeness we flesh this argument out.
Let $R = a - b$. Choose $T$ large enough so that $T > 2R$.^{7} ^{7}Recall that $a$ might be chosen very large for the bounding of $I_2$, but its size is independent of $T$. This implies that the set of $z + iT$ with $\lvert z - a \rvert < T$ lies entirely in the upper half-plane. Let $n(r)$ denote the number of zeros of $g(z)$ in $\lvert z - a \rvert \leq r$. We use the trivial integral bounds \begin{equation}\label{eq:nR} n(R) \log 2 = n(R) \int_R^{2R} \frac{dr}{r} \leq \int_0^{2R} \frac{n(r)}{r} \, dr. \end{equation} Using Jensen's Formula (Theorem 4 in the first note), we have that \begin{equation*} \int_0^{2R} \frac{n(r)}{r} \, dr = \frac{1}{2\pi} \int_0^{2\pi} \log \lvert g(a + 2Re^{i\theta}) \rvert \, d\theta - \log \lvert g(a) \rvert. \end{equation*}
By the Taylor expansion~\eqref{eq:ell_a_small} (and our choice of $a$ large), we see that $\log \lvert g(a) \rvert$ is bounded by a constant. The convexity bound for $\ell(s)$ (explicitly given in Theorem 7 from the first note, though simply knowing that there is a polynomial bound suffices) implies that $\log \lvert g(a + 2Re^{i\theta}) \rvert \ll \log T$, hence $n(R) \ll \log T$.
As the interval $[b, a]$ is contained in the disk $\lvert z - a \rvert \leq R$, we have that $k \leq n(R) = O(\log T)$, and thus $I_3 = O(\log T)$. The same bound applies for $I_4$. Combining these four bounds completes the proof.
$\square$
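As an aside, Jensen's formula used in the proof above can be sanity-checked numerically on a polynomial with known zeros. The polynomial $g$, the center $0$, and the radius $R = 2$ below are arbitrary illustrative choices: the zeros at $0.5$ and $1.5$ lie inside the disk, while the zero at $-2.5$ lies outside.

```python
# Verify Jensen's formula
#   ∫_0^R n(r)/r dr = (1/2π) ∫_0^{2π} log|g(R e^{iθ})| dθ - log|g(0)|
# for g(z) = (z - 0.5)(z - 1.5)(z + 2.5) on the disk |z| <= R = 2.
# This g, the center, and the radius are arbitrary illustrative choices.
from mpmath import mp, mpf, mpc, exp, pi, quad, log

mp.dps = 20
g = lambda z: (z - mpf("0.5")) * (z - mpf("1.5")) * (z + mpf("2.5"))
R = mpf(2)

# Left side by hand: zeros at 0.5 and 1.5 contribute log(R/0.5) + log(R/1.5).
lhs = log(R / mpf("0.5")) + log(R / mpf("1.5"))

# Right side: average of log|g| over the circle |z| = R, minus log|g(0)|.
circle = lambda t: log(abs(g(R * exp(mpc(0, t)))))
rhs = quad(circle, [0, 2 * pi]) / (2 * pi) - log(abs(g(0)))

print(lhs, rhs)
```

The two sides agree to high precision; in the proof, the same identity converts a zero count into boundary values of $\log \lvert g \rvert$, which are then controlled by convexity bounds.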
The proof of this theorem ends up bounding the sizes of four integrals. These integrals were:

A vertical integral near the critical strip. To bound it, we used a Lindelöf-on-average type result.

A vertical integral far to the right, well within the region of absolute convergence. To bound it, we expanded the integrand in a series and naively bounded.

Two horizontal integrals with large (and larger) imaginary part. To bound them, we showed that they were controlled by the number of zeros of an auxiliary function on horizontal segments, of which there are no more than $O(\log T)$ for fundamental growth reasons.
It is possible to change the left side of the rectangle and choose it instead to be far to the left, analogous to how the right-hand side was chosen far to the right. Instead of appealing to a Lindelöf-on-average result to bound it, we could instead use the functional equation, Stirling's series to asymptotically estimate the gamma functions, and naive expansion for the Dirichlet series itself.
By the functional equation, $\lvert L(\sigma + it) \rvert \to \infty$ as $\sigma \to -\infty$ (uniformly for $t$ bounded away from $0$), so for a given $c \neq 1$, there are positive constants $\tau, B$ such that there are no $c$-values in the quarter-plane $t > \tau$, $\sigma < -B$. Choose $b < -B - 2$ and $T > \tau + 1$ (though as before we also want $b$ sufficiently negative so that the Dirichlet series, after applying the functional equation, is well within its region of absolute convergence; and we then only consider $T$ larger than $a - b$).
Write the functional equation in the form $L(s) = \gamma(s) \overline{L(1 - \overline{s})}$. Then \begin{equation*} \log \lvert L(s) - c \rvert = \log \lvert \gamma(s) \rvert + \log \lvert \overline{L(1 - \overline{s})} \rvert + \log \bigg\lvert 1 - \frac{c}{\gamma(s) \overline{L(1 - \overline{s})}} \bigg\rvert. \end{equation*} An explicit Stirling approximation shows that \begin{equation*} \log \lvert \gamma(s) \rvert = (\tfrac{1}{2} - \sigma) \big(d_L \log t + \log( \alpha Q^2) \big) + O(\tfrac{1}{t}) \end{equation*} for $\lvert t \rvert > 1$ and $\sigma$ restricted to a fixed interval. Here, $\alpha$ is as in Lemma 6 of the first note, giving Stirling's approximation for the gamma factor $\Delta(s)$, \begin{equation*} \alpha = \prod_{\nu = 1}^N \alpha_\nu^{2 \alpha_\nu}. \end{equation*}
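This Stirling approximation can be checked directly for $\zeta$: there $\gamma(s)$ is the classical factor $\chi(s) = 2^s \pi^{s-1} \sin(\pi s/2) \Gamma(1 - s)$, with $d_L = 1$, $Q = \pi^{-1/2}$, and $\alpha = \tfrac{1}{2}$, so $\alpha Q^2 = \tfrac{1}{2\pi}$. The sample point $s = -2 + 1000i$ below is an arbitrary choice.

```python
# Check  log|gamma_factor(s)| = (1/2 - σ)(d_L log t + log(α Q²)) + O(1/t)
# for zeta, whose factor is chi(s) = 2^s π^{s-1} sin(πs/2) Γ(1-s),
# with d_L = 1 and α Q² = 1/(2π).  The point s = -2 + 1000i is arbitrary.
from mpmath import mp, mpc, pi, sin, gamma, log

mp.dps = 30
s = mpc(-2, 1000)
sigma, t = s.real, s.imag

chi = 2**s * pi**(s - 1) * sin(pi * s / 2) * gamma(1 - s)
exact = log(abs(chi))
approx = (mp.mpf("0.5") - sigma) * (log(t) + log(1 / (2 * pi)))
print(exact, approx, abs(exact - approx))
```

The discrepancy is of size $O(1/t)$, consistent with the error term in the displayed approximation.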
Choosing $b$ sufficiently large and negative, for any $t \geq T$ we have that $L(1 - (b + it)) \approx 1$ and $\lvert \gamma(b + it) \rvert \gg 1$ (in fact it behaves like $t^{d_L(\tfrac{1}{2} - b)}$ by Stirling's approximation). Thus \begin{equation*} \log\bigg\lvert 1 - \frac{c}{\gamma(s) \overline{L(1 - \overline{s})}} \bigg\rvert = O\Big( \frac{1}{\lvert \gamma(s) L(1 - \overline{s})\rvert} \Big) = O\big(\tfrac{1}{t}\big), \end{equation*} where the last approximation is very lossy, but sufficient.
The integral from this shifted left side of the rectangle, from $b + iT$ to $b + 2iT$, can thus be written \begin{align*} \int_T^{2T} \log \big\lvert L(b + it) - c \big\rvert \, dt &= (\tfrac{1}{2} - b) \int_T^{2T} \big(d_L \log t + \log(\alpha Q^2)\big) \, dt \\ &\quad+\int_T^{2T} \log \lvert L(1 - b - it) \rvert \, dt + O(\log T). \end{align*} The first integral is completely explicit and can be directly computed.^{8} ^{8}A similar counting problem, though with slightly different methods, is the basic counting problem for zeros of $L$-functions in the Selberg class. See for example Theorem 5.8 of Iwaniec and Kowalski's Analytic Number Theory, or indeed almost any treatment of the zeros of the zeta function for similar estimates. The second integral is small if $b$ is sufficiently negative, for precisely the same reason that~\eqref{eq:ell_a_small} is small in the proof of the previous theorem.
In total, we compute that \begin{align*} \int_T^{2T} &\log \lvert \ell(b + it) \rvert \, dt = \int_T^{2T} \log \big\lvert L(b + it) - c \big\rvert \, dt - T \log \big\lvert 1 - c \big\rvert \\ &= (\tfrac{1}{2} - b) \big( d_L T \log \tfrac{4T}{e} + T \log(\alpha Q^2) \big) - T \log \lvert 1 - c \rvert + O(\log T). \end{align*} Choosing now $\mathcal{R}$ to be the rectangle with corners $b + iT, a + iT, a + 2iT, b + 2iT$ with this choice of $a, b, T$, using this computation for the integral along the left vertical line segment, and applying the same techniques to bound $I_2$, $I_3$, and $I_4$ from Theorem 4 proves the following result.
Let $L \in \widetilde{S}$ with $a(1) = 1$. Let $c \neq 1$. Then for sufficiently large negative $b$, \begin{align*} 2 \pi \sum_{{T < \gamma_c \leq 2T}} (\beta_c - b) &= (\tfrac{1}{2} - b) \big( d_L T \log \frac{4T}{e} + T \log(\alpha Q^2) \big) \\ &\quad - T \log \lvert 1 - c \rvert + O(\log T), \end{align*} where $\alpha = \prod \alpha_\nu^{2 \alpha_\nu}$. The sum is over all $c$-values $\rho_c = \beta_c + i \gamma_c$ with $T < \gamma_c \leq 2T$.
Unweighting $c$-values
The results in Theorem 4 and Theorem 5 count $c$-values weighted by the distance between their real parts and a fixed line. By choosing two different lines and combining the weights, we can obtain an unweighted count of $c$-values.
For $c \neq 1$, let $N^c(T)$ denote the number of $c$-values of $L(s)$ (for $L(s)$ with $a(1) = 1$) with $T < \gamma_c \leq 2T$. Subtracting the asymptotic of Theorem 5 with $b + 1$ in place of $b$ from the asymptotic with $b$ counts \begin{equation*} \sum_{T < \gamma_c \leq 2T} (\beta_c - b) - \sum_{T < \gamma_c \leq 2T} (\beta_c - b - 1) = \sum_{T < \gamma_c \leq 2T} 1 = N^c(T). \end{equation*} As a simple corollary to Theorem 5, we have proved the following.
Let $L \in \widetilde{S}$ with $a(1) = 1$. Let $c \neq 1$. Then \begin{equation*} N^c(T) = \frac{d_L}{2\pi} T \log \frac{4T}{e} + \frac{T}{2\pi} \log (\alpha Q^2) + O(\log T). \end{equation*}
Choosing $c = 0$ gives the standard zero-counting theorems. It is common to see the two logarithmic main terms combined.
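As a sanity check, this formula can be compared against an actual count of zeros of $\zeta$ (degree $d_L = 1$, $Q = \pi^{-1/2}$, $\alpha = \tfrac{1}{2}$, so $\alpha Q^2 = \tfrac{1}{2\pi}$). The height $T = 50$ below is an arbitrary small choice, so the $O(\log T)$ error is still visible.

```python
# Compare the counting formula
#   N^0(T) ≈ (d_L/2π) T log(4T/e) + (T/2π) log(α Q²)
# against a direct count of zeros of zeta with T < γ <= 2T.
# For zeta: d_L = 1 and α Q² = 1/(2π).  T = 50 is an arbitrary choice.
from mpmath import mp, mpf, zetazero, log, e, pi

mp.dps = 15
T = mpf(50)

count, n = 0, 1
while True:
    gam = zetazero(n).imag  # ordinate of the n-th zero of zeta
    if gam > 2 * T:
        break
    if gam > T:
        count += 1
    n += 1

predicted = T / (2 * pi) * log(4 * T / e) + T / (2 * pi) * log(1 / (2 * pi))
print(count, predicted)
```

The direct count gives $19$ zeros with $50 < \gamma \leq 100$, against a predicted main term of about $19.6$, comfortably within the $O(\log T)$ error.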
Almost all zeros are near the line
Let us now specialize exactly to Dirichlet series having degree $d_L = 2$, such as those coming from half-integral weight modular forms (or full-integral weight modular forms). We let $N(T) = N^0(T)$ count the number of zeros with imaginary part between $T$ and $2T$.
Then on the one hand, Corollary 6 shows that \begin{equation*} N(T) = \frac{1}{\pi} T \log \frac{4T}{e} + \frac{T}{2\pi} \log (\alpha Q^2) + O(\log T). \end{equation*}
We now count zeros to the right of the critical line. Define \begin{equation*} N^+(\sigma, T) = \# \{ \rho_c : T < \gamma_c \leq 2T, \beta_c > \sigma\}. \end{equation*} Let $\sigma > \max\{ \tfrac{1}{2}, 1 - \frac{1}{d_L} \} = \frac{1}{2}$, and fix any $\sigma^* \in (\frac{1}{2}, \sigma)$. Then \begin{equation*} N^+(\sigma, T) \leq \frac{1}{\sigma - \sigma^*} \sum_{\substack{\beta_c > \sigma \\ T < \gamma_c \leq 2T}} (\beta_c - \sigma^*). \end{equation*} By Theorem 4 (applied with $b = \sigma^*$), this is bounded by $O(T)$.
Thus there are on the order of $T \log T$ zeros in total, but only $O(T)$ zeros $\rho$ with $\Re \rho > \sigma > \frac{1}{2}$ for any fixed $\sigma > \frac{1}{2}$. The functional equation implies that the nontrivial zeros are symmetric about the critical line, so there are at most $O(T)$ zeros of distance greater than $\sigma - \frac{1}{2}$ from the critical line, and on the order of $T \log T$ zeros within $\sigma - \frac{1}{2}$ of the critical line.
Choosing $\sigma = \frac{1}{2} + \epsilon$ for any $\epsilon > 0$ proves the following.
Let $L(s, f)$ be the Dirichlet series associated to a cuspidal half-integral weight modular form with $a(1) = 1$. Then for any $\epsilon > 0$, $L(s, f)$ has $O_\epsilon(T)$ zeros $\rho = \sigma + it$ with $\lvert \sigma - \frac{1}{2} \rvert > \epsilon$.
Asymptotically, one hundred percent of the zeros of $L(s, f)$ occur within $\epsilon$ of the critical line.
For general degree, we have the following theorem.
Let $L(s)$ be a Dirichlet series in the extended Selberg class with $a(1) = 1$. Let $\sigma_0 = \max\{ \frac{1}{2}, 1 - \frac{1}{d_L} \}$. For any $\epsilon > 0$, $L(s)$ has $O_\epsilon(T)$ zeros $\rho = \sigma + it$ with $\sigma > \sigma_0 + \epsilon$, and hence (by the symmetry from the functional equation) also $O_\epsilon(T)$ zeros with $\sigma < 1 - \sigma_0 - \epsilon$.
In the proof presented here, the primary obstruction for higher degree Dirichlet series is the lack of Lindelöf-on-average results (as in Corollary 4 of the third note in this series) or improved subconvexity results. Specifically, the main difficulty is bounding the integral $I_1$ in the proof of Theorem 4 on suitable lines $\sigma = b$.
Comments (2)
2024-03-05 Chris
Have you published this yet? There is a typo in the second display equation. It should have a $\Lambda$ instead of a $\lambda$.
2024-03-06 DLD
Thank you! I've fixed the typo.
I haven't published these yet. I'm going to try to put this in a publishable form in the next couple of months.