The KLS (Kannan–Lovász–Simonovits) constant for log-concave measures

Description of constant

$C_{20c}$ is the KLS constant (Kannan–Lovász–Simonovits constant) for log-concave measures. It is defined as

\[C_{20c} := \sup_{n\ge 1} \psi_n,\]

where $\psi_n$ is the worst-case inverse Cheeger (isoperimetric) constant among isotropic log-concave probability measures on $\mathbb R^n$.

More precisely, let $\mu$ be a log-concave probability measure on $\mathbb R^n$ (i.e. $\mu$ has density $\rho(x)=e^{-V(x)}$ for some convex $V:\mathbb R^n\to\mathbb R\cup\{+\infty\}$). For a Borel set $A\subset\mathbb R^n$, define the (outer) Minkowski boundary measure

\[\mu^+(A) := \liminf_{\varepsilon\to 0^+} \frac{\mu(A_\varepsilon)-\mu(A)}{\varepsilon}, \qquad A_\varepsilon := \{x\in\mathbb R^n:\operatorname{dist}(x,A)\le \varepsilon\}.\]
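
As a quick numerical illustration, the following minimal Python sketch evaluates this difference quotient for a half-space $A=\{x : x_1\le a\}$ under the standard Gaussian measure, where $A_\varepsilon=\{x : x_1\le a+\varepsilon\}$ and the limit is the one-dimensional Gaussian density $\varphi(a)$; the threshold $a=0.7$ is an arbitrary illustrative choice.

```python
# Minimal sketch: finite-difference approximation of mu^+(A) for a Gaussian
# half-space A = {x : x_1 <= a}.  Here A_eps = {x : x_1 <= a + eps}, so the
# quotient (mu(A_eps) - mu(A)) / eps should converge to phi(a).
from scipy.stats import norm

a = 0.7  # arbitrary threshold, chosen only for illustration
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    quotient = (norm.cdf(a + eps) - norm.cdf(a)) / eps
    print(f"eps={eps:.0e}  quotient={quotient:.6f}")

print("limit  mu^+(A) = phi(a) =", norm.pdf(a))
```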

The Cheeger constant (isoperimetric coefficient) of $\mu$ is

\[h_\mu := \inf_A \frac{\mu^+(A)}{\min(\mu(A),1-\mu(A))},\]

where the infimum runs over Borel sets $A\subset\mathbb R^n$ with $0<\mu(A)<1$,

and the corresponding inverse Cheeger constant is

\[\psi_\mu := \frac{1}{h_\mu}.\]
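
For a log-concave measure on $\mathbb R$ it is a standard fact that half-lines can be taken as isoperimetric extremizers, so $h_\mu=\inf_{t\in(0,1)}\rho(F^{-1}(t))/\min(t,1-t)$ with $F$ the distribution function. The sketch below assumes this one-dimensional reduction and recovers the known value $h=1$ for the exponential and Laplace distributions; the helper `cheeger_1d` and its $t$-grid are illustrative choices.

```python
# Sketch under the assumption that, in dimension 1, half-lines are extremal
# for log-concave measures: scan half-lines (-inf, F^{-1}(t)] over t in (0,1)
# and take the smallest ratio rho(F^{-1}(t)) / min(t, 1-t).
import numpy as np
from scipy.stats import expon, laplace

def cheeger_1d(dist, num=100001):
    """Numerical Cheeger constant of a 1-D distribution via half-line cuts."""
    ts = np.linspace(1e-6, 1 - 1e-6, num)
    return (dist.pdf(dist.ppf(ts)) / np.minimum(ts, 1 - ts)).min()

print("Exp(1)  :", cheeger_1d(expon))    # ~ 1.0  (h = 1, hence psi = 1)
print("Laplace :", cheeger_1d(laplace))  # ~ 1.0  (h = 1; note Var = 2, so not isotropic)
```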

We say that $\mu$ is isotropic if it has barycenter $0$ and covariance matrix $\mathrm{Cov}(\mu)=I_n$. One then defines

\[\psi_n := \sup\{\psi_\mu : \mu \text{ is an isotropic log-concave probability measure on }\mathbb R^n\}.\]
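
Any full-dimensional log-concave measure has finite second moments and can be brought to isotropic position by an affine map; since affine images of log-concave measures are again log-concave, the restriction to isotropic measures in the definition of $\psi_n$ is only a normalization. The sketch below illustrates this centering and whitening step on a sampled log-concave measure; the particular linear map and sample size are arbitrary choices.

```python
# Minimal sketch: put a (sampled) log-concave measure into isotropic position
# by centering and whitening.  The starting distribution, a product of
# exponentials pushed forward by a fixed invertible linear map, is an
# arbitrary log-concave example chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.3, 0.0],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 0.2]])              # invertible, so log-concavity is preserved
X = rng.exponential(size=(200_000, 3)) @ A

X = X - X.mean(axis=0)                       # barycenter -> 0
L = np.linalg.cholesky(np.cov(X, rowvar=False))
Y = X @ np.linalg.inv(L).T                   # covariance -> I_3 (whitening)

print(np.round(Y.mean(axis=0), 3))           # ~ [0, 0, 0]
print(np.round(np.cov(Y, rowvar=False), 3))  # ~ identity matrix
```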

The KLS conjecture asserts that $C_{20c}<\infty$, i.e. that $\psi_n$ is bounded by a universal constant independent of the dimension $n$ (in its original formulation: half-spaces attain the infimum defining $h_\mu$ up to a universal constant factor).

It is often convenient to work with the Poincaré (spectral gap) constant $C_P(\mu)$, defined as the smallest constant such that

\[\mathrm{Var}_\mu(f)\le C_P(\mu)\int |\nabla f|^2\,d\mu\]

for all smooth enough $f$. For log-concave measures, $C_P(\mu)$ is comparable to $\psi_\mu^2$ up to universal constant factors; for instance one has

\[\frac{1}{\pi}\,\psi_\mu^2 \ \le\ C_P(\mu)\ \le\ 4\,\psi_\mu^2.\]
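
As a simple consistency check, for the standard Gaussian one has $C_P(\gamma)=1$ (the classical Gaussian Poincaré inequality, saturated by linear functions) and $\psi_\gamma=\sqrt{\pi/2}$, so the displayed sandwich reads $1/2\le 1\le 2\pi$. The sketch below is a minimal numerical verification of the saturating test function; the sample size is arbitrary.

```python
# Minimal sketch: check the sandwich between C_P and psi^2 for the standard
# Gaussian, where C_P = 1 and psi = sqrt(pi/2).
import numpy as np

psi = np.sqrt(np.pi / 2)    # inverse Cheeger constant of the standard Gaussian
C_P = 1.0                   # Gaussian Poincare constant, attained by f(x) = x_1

# Monte-Carlo check that f(x) = x_1 gives Var(f) / E|grad f|^2 = 1 (|grad f| = 1).
rng = np.random.default_rng(0)
print("Var(f) / E|grad f|^2 ~", rng.standard_normal(10**6).var())

print(psi**2 / np.pi, "<=", C_P, "<=", 4 * psi**2)   # 0.5 <= 1.0 <= 6.28...
```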

Known upper bounds

Since a dimension-free upper bound is not known, bounds are stated for $\psi_n$ as a function of $n$.

| Bound | Reference | Comments |
| --- | --- | --- |
| $\psi_n \le C\sqrt{n}$ | [KLS1995] | First general polynomial bound (via the localization lemma); more generally $\psi_\mu \le C\sqrt{\mathrm{Tr}(\mathrm{Cov}(\mu))}$. |
| $\psi_n \le C n^{1/4}$ | [LV2024] | Improves the previous best polynomial exponent ($n^{1/3}$ up to logarithmic factors, due to Eldan); based on stochastic localization. (Originally appeared in FOCS 2017.) |
| $\psi_n \le \exp\big(C\sqrt{\log n}\,\log\log n\big)$ | [Che2021] | First subpolynomial bound (equivalently, $\psi_n=n^{o(1)}$). |
| $\psi_n \le C(\log n)^5$ | [KL2022] | First polylogarithmic bound. |
| $\psi_n \le C(\log n)^{3.2226\ldots}$ | [JLV2022] | Improves the polylogarithmic exponent. |
| $\psi_n \le C(\log n)^{3.082\ldots}$ | [K2023] | Due to Lehec (personal communication), as reported in [K2023]. |
| $\psi_n \le C\sqrt{\log n}$ | [K2023] | Current best general bound (Theorem 1.2 of [K2023]). |
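
The sketch below is purely illustrative: it prints the growth rates from the table for a few values of $n$, with every universal constant $C$ set to $1$, to convey how sharply the known bounds have improved.

```python
# Minimal sketch: compare the growth rates from the table above, with all
# universal constants C set to 1 (so the numbers are only indicative).
import numpy as np

for n in [10**2, 10**4, 10**6, 10**9]:
    ln = np.log(n)
    print(f"n = {n:>10}:  sqrt(n) = {np.sqrt(n):10.1f}   n^(1/4) = {n**0.25:8.1f}   "
          f"(log n)^5 = {ln**5:10.1f}   sqrt(log n) = {np.sqrt(ln):5.2f}")
```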

Known lower bounds

| Bound | Reference | Comments |
| --- | --- | --- |
| $\psi_n \ge \sqrt{\pi/2} \approx 1.25331$ | Classical | For the standard Gaussian measure $\gamma$ on $\mathbb R^n$ (which is isotropic and log-concave), the isoperimetric minimizers are half-spaces and $h_\gamma=\sqrt{2/\pi}$, hence $\psi_\gamma=\sqrt{\pi/2}$ in every dimension. |
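
As a numerical sanity check of this value, the sketch below scans half-spaces of Gaussian measure $t$, whose boundary measure is $\varphi(\Phi^{-1}(t))$ (half-spaces being exact minimizers by the Gaussian isoperimetric inequality), and locates the minimizing mass at $t=1/2$; the $t$-grid is an arbitrary discretization.

```python
# Minimal sketch: for the standard Gaussian, half-spaces of measure t have
# boundary measure phi(Phi^{-1}(t)) and are exact isoperimetric minimizers.
# Scan t to recover h_gamma = 2*phi(0) = sqrt(2/pi).
import numpy as np
from scipy.stats import norm

t = np.linspace(1e-6, 1 - 1e-6, 200001)
ratio = norm.pdf(norm.ppf(t)) / np.minimum(t, 1 - t)
i = ratio.argmin()
print("argmin t  ~", t[i])            # ~ 0.5
print("h_gamma   ~", ratio[i])        # ~ sqrt(2/pi) = 0.79788...
print("psi_gamma ~", 1 / ratio[i])    # ~ sqrt(pi/2) = 1.25331...
```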

References

Contribution notes

This page was prepared with the assistance of ChatGPT 5.2 Pro.