One-dimensional convex sub-Gaussian comparison constant
Description of constant
Let $X$ be an integrable real random variable. We say that $X$ is $1$-sub-Gaussian in the tail sense if \(\mathbf{E}[X]=0 \quad\text{and}\quad \mathbf{P}(\lvert X\rvert>t)\le 2e^{-t^2/2}\quad\text{for all }t\ge 0.\) [DP2026-abs-solved] [DP2026-def-tail]
For real random variables $X,Y$, write $X\preceq_{cx}Y$ if \(\mathbf{E}[f(X)]\le \mathbf{E}[f(Y)]\) for every convex $f:\mathbf{R}\to\mathbf{R}$ for which both expectations are finite. [DP2026-def-constant]
Let $G\sim\mathcal{N}(0,1)$ be standard normal. We define \(C_{48}:=\inf\Bigl\{C>0:\ \forall\ \text{$1$-sub-Gaussian }X,\ X\preceq_{cx}\sqrt{C}\,G\Bigr\}.\) [DP2026-def-constant]
Davis and Power solved this one-dimensional problem: writing $c_\star$ for the sharp comparison factor, they showed that $C_{48}=c_\star^2=c_0^2$, where $c_0$ is determined by an explicit system of one-dimensional equations. Numerically, \(C_{48}=c_0^2 \approx 5.33386.\) [DP2026-abs-solved] [DP2026-thm1-sharp] [DP2026-rem2-num]
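The domination claim can be sanity-checked numerically. For mean-zero variables, $X\preceq_{cx}Y$ is equivalent to the stop-loss inequality $\mathbf{E}[(X-t)_+]\le \mathbf{E}[(Y-t)_+]$ for all $t$; this characterization is a standard fact about convex order, not from [DP2026]. The sketch below takes a Rademacher variable (which is $1$-sub-Gaussian in the tail sense, since $2e^{-t^2/2}\ge 1$ for $t<1$) and checks domination by $c_0 G$ on a grid of thresholds, using the value $c_0\approx 2.30952$ reported in [DP2026-rem2-num].

```python
import math

C0 = 2.30952  # c_0 from [DP2026-rem2-num]; c_0^2 ~ 5.33386


def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def gauss_stop_loss(c: float, t: float) -> float:
    """E[(c*G - t)_+] for G ~ N(0,1), using the closed form
    E[(G - a)_+] = phi(a) - a*(1 - Phi(a)) with a = t/c."""
    a = t / c
    phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
    return c * phi - t * (1.0 - norm_cdf(a))


def rademacher_stop_loss(t: float) -> float:
    """E[(X - t)_+] for X = +/-1 with probability 1/2 each."""
    return 0.5 * (max(1.0 - t, 0.0) + max(-1.0 - t, 0.0))


# Stop-loss characterization of convex order for mean-zero variables:
# X <=_cx Y  iff  E[(X - t)_+] <= E[(Y - t)_+] for all t.
# Theorem 1 predicts the Rademacher variable is dominated by c_0 * G.
ok = all(rademacher_stop_loss(t) <= gauss_stop_loss(C0, t)
         for t in [x / 10.0 for x in range(-50, 51)])
print(ok)
```

This is only a spot check on one admissible distribution and a finite grid of thresholds, not a proof; the sharpness direction of Theorem 1 says the inequality would fail for some admissible $X^\star$ if $c_0$ were replaced by any smaller constant.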
Known upper bounds
| Bound | Reference | Comments |
|---|---|---|
| $<\infty$ | [vH25] | Historical finiteness bound: van Handel proved that a universal Gaussian comparator exists in every dimension. [vH25-thm11] |
| $c_0^2 \approx 5.33386$ | [DP2026] | Solves the one-dimensional problem exactly. [DP2026-thm1-sharp] [DP2026-rem2-num] |
Known lower bounds
| Bound | Reference | Comments |
|---|---|---|
| $1$ | Elementary | Taking $X\sim\mathcal{N}(0,1)$ (which satisfies the tail bound) and testing with the convex function $f(x)=x^2$ gives $1=\mathbf{E}[X^2]\le \mathbf{E}[(\sqrt{C}\,G)^2]=C$, so any admissible $C$ must satisfy $C\ge 1$. |
| $c_0^2 \approx 5.33386$ | [DP2026] | The sharpness part of Theorem 1 shows that no smaller constant works. [DP2026-thm1-sharp] [DP2026-rem2-num] |
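The elementary row relies on $\mathcal{N}(0,1)$ itself being $1$-sub-Gaussian in the tail sense. That follows from the Chernoff-type bound $\mathbf{P}(\lvert G\rvert>t)=\operatorname{erfc}(t/\sqrt{2})\le 2e^{-t^2/2}$, a standard fact not taken from the source; the sketch below verifies it on a grid.

```python
import math


def gauss_two_sided_tail(t: float) -> float:
    """P(|G| > t) for G ~ N(0,1), via the complementary error function."""
    return math.erfc(t / math.sqrt(2.0))


# Chernoff-type bound: P(|G| > t) <= 2*exp(-t^2/2) for all t >= 0,
# so X ~ N(0,1) is admissible in the infimum defining C_48.
admissible = all(gauss_two_sided_tail(t) <= 2.0 * math.exp(-0.5 * t * t)
                 for t in [x / 20.0 for x in range(0, 201)])

# Testing f(x) = x^2 against Y = sqrt(C)*G then gives
# E[f(X)] = 1 and E[f(Y)] = C, forcing C >= 1.
print(admissible)
```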
Additional comments and links
- Solved case. The paper of Davis and Power determines the sharp one-dimensional constant and shows that it is attained by an extremal distribution saturating the tail constraint. [DP2026-abs-solved]
- Higher dimensions remain open. The case of general $d\ge 2$ is still open, although van Handel proved that some universal dimension-free comparator exists. [DP2026-dim-open] [vH25-thm11]
- Partial higher-dimensional consequences. Davis and Power also prove a sequential tensorization principle for multivariate convex domination, and a dimension-free Gaussian comparator for the cone generated by convex ridge functions. [DP2026-abs-solved]
- A Strassen-type reformulation of van Handel’s theorem is given as Corollary 1.2: one can construct $X$ and a standard Gaussian $G$ on a common space so that $X=c\,\mathbf{E}[G\mid X]$. [vH25-cor12]
- Historical discussion: MathOverflow question on sub-Gaussian variables and convex ordering.
References
- [DP2026] Davis, Damek; Power, Sam. The sharp one-dimensional convex sub-Gaussian comparison constant. arXiv:2604.03170 (2026). DOI: 10.48550/arXiv.2604.03170. arXiv PDF: arXiv:2604.03170.
- [DP2026-abs-solved] loc: arXiv PDF p.1, Abstract. quote: “Let $X$ be an integrable real random variable with mean zero and two-sided sub-Gaussian tail $P(|X| > t) \le 2e^{-t^2/2}$ for all $t \ge 0$. We determine the smallest constant $c_\star$ such that $X$ is dominated in convex order by $c_\star G$, where $G$ is standard normal. Equivalently, $c_\star^2$ is the sharp one-dimensional convex sub-Gaussian comparison constant appearing in the Optimization Constants in Mathematics repository [DITc26]. We show that $c_\star$ is given by an explicit system of one-dimensional equations and is attained by an extremal distribution that saturates the tail constraint. Numerically, $c_\star \approx 2.30952$ (so $c_\star^2 \approx 5.33386$).”
- [DP2026-def-tail] loc: arXiv PDF p.1, Section 1 “Setup and the constant”, equation (1). quote: “We call $X$ $1$-sub-Gaussian in the tail sense if $E[X] = 0$ and $P(|X| > t) \le s_G(t) := \min\{1, 2e^{-t^2/2}\}$ for all $t \ge 0$.”
- [DP2026-def-constant] loc: arXiv PDF p.1, Section 1 “Setup and the constant”, equation (2). quote: “Define the one-dimensional comparison constant $c_\star := \inf\{c > 0 : \text{every } X \text{ satisfying (1) obeys } X \preceq_{cx} cG\}$, where $X \preceq_{cx} Y$ denotes convex domination: $E[f(X)] \le E[f(Y)]$ for every convex $f : \mathbf{R} \to \mathbf{R}$ for which both expectations are finite.”
- [DP2026-thm1-sharp] loc: arXiv PDF pp.1–2, Theorem 1. quote: “Theorem 1 (Sharp one-dimensional convex sub-Gaussian comparison). The sharp constant in (2) satisfies $c_\star = c_0$, where $c_0$ is defined by (3)–(6). In particular: 1. For every random variable $X$ satisfying (1) and every convex $f : \mathbf{R} \to \mathbf{R}$, $E[f(X)] \le E[f(c_0G)]$ whenever the right-hand side is finite. 2. For every $c < c_0$, there exist a random variable $X^\star$ satisfying (1) and a convex function $f$ such that $E[f(X^\star)] > E[f(cG)]$. … Consequently, the one-dimensional value of the constant $C_{48}$ in [DITc26] is $C_{48}^{(1)} = c_0^2$.”
- [DP2026-rem2-num] loc: arXiv PDF p.2, Remark 2. quote: “A direct high-precision evaluation of (3)–(6) gives $a \approx 1.80334$, $p_0 \approx 0.39342$, $z \approx 0.27041$, $c_0 \approx 2.30952$, $c_0^2 \approx 5.33386$. No numerical computation is used in the derivation of the exact characterization $c_\star = c_0$.”
- [DP2026-dim-open] loc: arXiv PDF p.2, end of Remark 3 / start of discussion after it. quote: “The case of general $d \ge 2$ remains open.”
- [vH25] van Handel, Ramon. On the subgaussian comparison theorem. arXiv:2512.18588 (2025). DOI: 10.48550/arXiv.2512.18588. arXiv PDF: arXiv:2512.18588. Author PDF
- [vH25-thm11] loc: Author PDF p.1, Theorem 1.1. quote: “Let $X$ be any $1$-subgaussian random vector in $\mathbf{R}^n$ and $G \sim N(0, I_n)$ be a standard Gaussian vector in $\mathbf{R}^n$. Then $\mathbf{E}[f(X)] \leq \mathbf{E}[f(cG)]$ for every convex function $f : \mathbf{R}^n \to \mathbf{R}$, where $c$ is a universal constant.”
- [vH25-cor12] loc: Author PDF p.1, Corollary 1.2. quote: “There is a universal constant $c$ such that for every $1$-subgaussian vector $X$ in $\mathbf{R}^n$, we can construct $X$ and a standard Gaussian vector $G \sim N(0, I_n)$ on a common probability space such that $X = c\mathbf{E}[G|X]$.”