Analytic Number Theory Exponent Database

14 Distribution of primes: long ranges

Let \(\Lambda (n)\) denote the von Mangoldt function, i.e. \(\Lambda (n) = \log p\) if \(n = p^m\) for some prime \(p\) and positive integer \(m\), and \(\Lambda (n) = 0\) otherwise.

Definition 14.1

For all \(x \ge 1\) define the Chebyshev prime counting functions \(\psi (x)\), \(\theta (x)\) and \(\pi (x)\) as

\[ \psi (x) := \sum _{n \le x}\Lambda (n),\qquad \theta (x) := \sum _{p \le x}\log p,\qquad \pi (x) := \sum _{p \le x}1 \]

where the first sum is over positive integers \(n\) and the last two sums are over primes \(p\).
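For concreteness, here is a minimal Python sketch of these three functions (the implementation and names such as chebyshev_psi are ours, and sympy's primerange is assumed for prime generation):

```python
import math
from sympy import primerange  # prime generation; any sieve would do

def chebyshev_psi(x: int) -> float:
    """psi(x): each prime p <= x contributes log p once for every
    power p, p^2, ..., p^k <= x, i.e. the sum of Lambda(n) for n <= x."""
    total = 0.0
    for p in primerange(2, x + 1):
        k = 1                          # largest k with p**k <= x
        while p ** (k + 1) <= x:
            k += 1
        total += k * math.log(p)
    return total

def chebyshev_theta(x: int) -> float:
    """theta(x): sum of log p over primes p <= x."""
    return sum(math.log(p) for p in primerange(2, x + 1))

def prime_pi(x: int) -> int:
    """pi(x): number of primes p <= x."""
    return sum(1 for _ in primerange(2, x + 1))

for x in (10**3, 10**4, 10**5):
    print(x, chebyshev_psi(x), chebyshev_theta(x), prime_pi(x))
```

Note that \(\psi (x) - \theta (x) = O(x^{1/2})\), since the difference consists only of the contribution of proper prime powers; this is why the two functions are interchangeable in asymptotic statements.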

These functions, particularly \(\pi (x)\), are central to number theory because they measure the distribution of prime numbers among the integers. A well-known result is the prime number theorem.

Theorem 14.2 Prime number theorem

As \(x \to \infty \),

\[ \pi (x) \sim \frac{x}{\log x} \sim \operatorname {li}(x) := \int _2^{x}\frac{dt}{\log t}. \]
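To make the quality of these approximations concrete, the following sketch (assuming sympy and mpmath are available) tabulates all three quantities; sympy.primepi counts primes exactly, and mpmath's li with offset=True evaluates \(\int _2^x dt/\log t\):

```python
import math
from sympy import primepi  # exact prime-counting function
from mpmath import li      # logarithmic integral

for k in range(3, 8):
    x = 10 ** k
    print(f"x = 10^{k}: pi(x) = {int(primepi(x))}, "
          f"x/log x = {x / math.log(x):.1f}, "
          f"li(x) = {float(li(x, offset=True)):.1f}")
```

Already at \(x = 10^6\), where \(\pi (x) = 78498\), \(\operatorname {li}(x)\) is accurate to within a few hundred while \(x/\log x\) undercounts by several thousand.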

The following are equivalent formulations of the prime number theorem.

Theorem 14.3 Prime number theorem, alternative formulations

As \(x \to \infty \), one has \(\psi (x) \sim x\) and \(\theta (x) \sim x\).

14.1 Error bounds for prime counting functions

In addition to these asymptotics, various bounds on the deviation of each function from its limiting behaviour are known. The current best-known error bounds are derived from zero-free regions of the Riemann zeta function \(\zeta (s)\). The relation between the zeroes of \(\zeta (s)\) and error bounds for prime counting functions is illustrated by von Mangoldt’s explicit formula: for all non-integer \(x > 1\), one has

\[ \psi (x) = x - \sum _{\rho }\frac{x^\rho }{\rho } - \log 2\pi - \frac{1}{2}\log (1 - x^{-2}), \]

where \(\rho \) runs through all non-trivial zeroes of \(\zeta (s)\).
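The formula can be tested numerically by truncating the sum over zeroes. Below is a sketch using mpmath's zetazero, which returns the \(n\)-th non-trivial zero in the upper half-plane; since the zeroes come in conjugate pairs \(\rho , \bar{\rho }\), each pair contributes \(2\, \Re (x^{\rho }/\rho )\) to the sum:

```python
import math
from mpmath import mp, zetazero

mp.dps = 15

def psi_truncated(x: float, n_zeros: int = 100) -> float:
    """Right-hand side of von Mangoldt's explicit formula, with the
    sum over zeros truncated to the first n_zeros conjugate pairs."""
    xm = mp.mpf(x)
    zero_sum = mp.mpf(0)
    for n in range(1, n_zeros + 1):
        rho = zetazero(n)                       # rho = 1/2 + i*gamma_n
        zero_sum += 2 * (xm ** rho / rho).real  # pair rho, conj(rho)
    return float(xm - zero_sum - mp.log(2 * mp.pi)
                 - mp.log(1 - xm ** -2) / 2)

# psi(20.5) = 4 log 2 + 2 log 3 + log 5 + ... + log 19 ~ 19.27;
# the truncation converges (slowly) to this value as n_zeros grows.
print(psi_truncated(20.5))
```

The convergence is slow and non-uniform near the jumps of \(\psi \) at prime powers, reflecting the oscillation of the terms \(x^{\rho }/\rho \).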

Theorem 14.4 Korobov–Vinogradov estimate

There exists a positive constant \(A\) such that

\[ \psi (x) - x, \; \theta (x) - x, \; \pi (x) - \operatorname {li}(x) \ll x\exp \left(-A\frac{(\log x)^{3/5}}{(\log \log x)^{1/5}}\right). \]

Table 14.1 lists the historical progression of estimates of \(\pi (x)\).

Table 14.1 Historical estimates of \(\pi (x)\), for \(x\) sufficiently large.

| Reference | Estimate of \(\pi (x)\) |
| --- | --- |
| Chebyshev | \(c_1 \dfrac {x}{\log x} \leq \pi (x) \leq c_2 \dfrac {x}{\log x}\) for some constants \(0 < c_1 < 1 < c_2\), i.e. \(\pi (x) \asymp \dfrac {x}{\log x}\) |
| de la Vallée Poussin [56], Hadamard [89] | \(\pi (x) = \dfrac {x}{\log x}(1 + o(1))\), i.e. \(\pi (x) \sim \dfrac {x}{\log x}\) |
| de la Vallée Poussin [57] | \(\pi (x) = \operatorname {li}(x) + O(x\exp (-A\sqrt{\log x}))\) for some \(A > 0\) |
| Littlewood [193] | \(\pi (x) = \operatorname {li}(x) + O(x\exp (-A\sqrt{\log x\log \log x}))\) for some \(A > 0\) |
| Korobov, Vinogradov [285] | \(\pi (x) = \operatorname {li}(x) + O\left(x\exp \left(-\dfrac {A(\log x)^{3/5}}{(\log \log x)^{1/5}}\right)\right)\) for some \(A > 0\) |
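The successive improvements in Table 14.1 are easiest to appreciate numerically. The sketch below evaluates the three exponential error factors, with every constant \(A\) set to \(1\) purely for illustration (the true constants differ between the results), working with \(\log x\) directly so that no overflow occurs:

```python
import math

def error_factors(log_x: float):
    """Relative error factors from Table 14.1, with A = 1 throughout."""
    ll = math.log(log_x)  # log log x
    de_la_vallee_poussin = math.exp(-math.sqrt(log_x))
    littlewood = math.exp(-math.sqrt(log_x * ll))
    korobov_vinogradov = math.exp(-log_x ** 0.6 / ll ** 0.2)  # 3/5, 1/5
    return de_la_vallee_poussin, littlewood, korobov_vinogradov

for k in (10, 100, 1000, 10000):
    log_x = k * math.log(10)  # log x for x = 10^k
    print(f"x = 10^{k}:", error_factors(log_x))
```

With equal constants, the Korobov–Vinogradov factor only overtakes Littlewood's once \(\log x\) exceeds roughly \((\log \log x)^7\), far beyond the range above; the gains in Table 14.1 are asymptotic ones.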

Under the Riemann hypothesis, stronger error bounds are known.

Theorem 14.5 [168]

If the Riemann hypothesis is true, then

\[ \psi (x) - x,\; \theta (x) - x \ll x^{1/2}(\log x)^2,\qquad \pi (x) - \operatorname {li}(x) \ll x^{1/2}\log x. \]
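As a quick empirical check of the size of the normalised error \(|\psi (x) - x|/(x^{1/2}(\log x)^2)\), here is a minimal sketch (assuming sympy; the naive \(\psi \) is recomputed as in the sketch after Definition 14.1):

```python
import math
from sympy import primerange

def chebyshev_psi(x: int) -> float:
    """Naive psi(x): sum over primes p <= x of (largest k with
    p**k <= x) * log p."""
    total = 0.0
    for p in primerange(2, x + 1):
        k = 1
        while p ** (k + 1) <= x:
            k += 1
        total += k * math.log(p)
    return total

# Under RH this ratio stays bounded; numerically it is already small
# for moderate x.
for x in (10**4, 10**5, 10**6):
    print(x, abs(chebyshev_psi(x) - x) / (math.sqrt(x) * math.log(x) ** 2))
```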

Slightly sharper estimates are possible if one assumes even stronger hypotheses.

Theorem 14.6 Heath-Brown [109]

Assume that the Riemann hypothesis is true. Furthermore, assume that

\[ F_T(X) := \sum _{0 < \gamma _1, \gamma _2 \le T}\frac{e((\gamma _1 - \gamma _2)X)}{1 + (\gamma _1 - \gamma _2)^2/4} = o(T (\log T)^2)\qquad (T \to \infty ), \]

uniformly in \(X\), where the sum is over the imaginary parts of all pairs of non-trivial zeroes of \(\zeta (s)\). Then

\[ \psi (x) = x + o(x^{1/2}(\log x)^2). \]

The same result was previously proved (assuming stronger hypotheses) by Gallagher–Mueller [80] and later by Mueller.
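For experimentation, the pair-correlation sum \(F_T(X)\) can be evaluated directly over the first few zeroes; in the sketch below (again assuming mpmath's zetazero) \(T\) is implicitly the ordinate of the last zero used, and \(e(u) = e^{2\pi iu}\):

```python
import math
from mpmath import zetazero

def F(X: float, n_zeros: int = 30) -> float:
    """Pair-correlation sum over ordinates 0 < gamma_1, gamma_2 <= T,
    with T the ordinate of the n_zeros-th zero.  By the symmetry
    (gamma_1, gamma_2) <-> (gamma_2, gamma_1) the sum is real, so only
    the cosine part of e((gamma_1 - gamma_2) X) survives."""
    gammas = [float(zetazero(n).imag) for n in range(1, n_zeros + 1)]
    total = 0.0
    for g1 in gammas:
        for g2 in gammas:
            d = g1 - g2
            total += math.cos(2 * math.pi * d * X) / (1 + d * d / 4)
    return total

# At X = 0 every phase equals 1 and the diagonal dominates; for larger
# X the off-diagonal terms oscillate and largely cancel.
print(F(0.0), F(0.5), F(1.0))
```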

14.2 Relation to zero-free regions of zeta

Lemma 14.7 Relation to zero-free regions

[139] Suppose \(\zeta (\sigma + it) \ne 0\) for \(\sigma \ge 1 - \eta (t)\), where \(\eta (t)\) is a positive, decreasing function. Then

\[ \psi (x) - x \ll x \exp \left(-A \omega (x) \right)\qquad (x \to \infty ) \]

for an absolute constant \(A > 0\), where

\[ \omega (x) := \inf _{t \ge 1}(\eta (t) \log x + \log t). \]

Applying Lemma 14.7, one obtains the error term estimates in the prime number theorem given in Table 14.2.
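For example, with the classical zero-free region \(\eta (t) = A/\log t\) (the first row of Table 14.2), the substitution \(u = \log t\) and the AM–GM inequality give

\[ \omega (x) = \inf _{u > 0}\left(\frac{A\log x}{u} + u\right) = 2\sqrt{A\log x}, \]

with the infimum attained at \(u = \sqrt{A\log x}\). Lemma 14.7 then yields \(\psi (x) - x \ll x\exp (-A'(\log x)^{1/2})\) for some absolute \(A' > 0\), which is the bound recorded in the table; the remaining rows follow from the same computation.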

Table 14.2 Zero-free regions for \(\zeta (s)\), along with the bound on \(\psi (x) - x\) that they imply. Here \(A\) represents an absolute, positive constant, which may be different at each occurrence.

| Reference | Zero-free region | Bound on \((\psi (x) - x)/x\) |
| --- | --- | --- |
| Theorem 13.6 | \(\sigma \ge 1 - \dfrac {A}{\log t}\) | \(\exp (-A(\log x)^{1/2})\) |
| Theorem 13.7 | \(\sigma \ge 1 - \dfrac {A\log \log t}{\log t}\) | \(\exp (-A(\log x \log \log x)^{1/2})\) |
| Theorem 13.8 | \(\sigma \ge 1 - \dfrac {A}{(\log t)^{3/4 + o(1)}}\) | \(\exp (-A(\log x)^{4/7 + o(1)})\) |
| Theorem 13.9 | \(\sigma \ge 1 - \dfrac {A}{(\log t)^{2/3}(\log \log t)^{1/3}}\) | \(\exp \left(-A\dfrac {(\log x)^{3/5}}{(\log \log x)^{1/5}}\right)\) |

The following type of converse statement is also known.

Theorem 14.8 [281, Theorem 40.1]

If for some \(0 < \alpha \le 1\) one has

\[ \psi (x) - x \ll x \exp (-A(\log x)^{1/(1 + \alpha )})\qquad (x \to \infty ) \]

then \(\zeta (\sigma + it) \ne 0\) for \(t\) sufficiently large and

\[ \sigma > 1 - \frac{A}{(\log t)^{\alpha }}. \]

Here \(A\) denotes an absolute positive constant, not necessarily the same at each occurrence.

14.3 Omega results

In the opposite direction, the following \(\Omega \)-results are known.

Theorem 14.9 Schmidt [260]

As \(x \to \infty \),

\[ \psi (x) = x + \Omega (x^{1/2}). \]

This can be improved slightly if one assumes the Riemann hypothesis.

Theorem 14.10 Littlewood [192]

If the Riemann hypothesis is true, then as \(x \to \infty \),

\[ |\pi (x) - \operatorname {li}(x)| = \Omega \left(x^{1/2}\frac{\log \log \log x}{\log x}\right). \]

Furthermore, the following is known.

Theorem 14.11 Grosswald [87]

If

\[ \Theta := \sup _{\rho :\, \zeta (\rho ) = 0}\Re \rho > \frac{1}{2}, \]

where the supremum is over the non-trivial zeroes of \(\zeta (s)\), then as \(x \to \infty \),

\[ \psi (x) = x + \Omega (x^{\Theta }). \]