Abstract
It is of significant interest in many applications to sample from a high-dimensional target distribution π with density π(dx) ∝ e^{−U(x)} dx, based on the temporal discretization of Langevin stochastic differential equations (SDEs). In this paper, we propose an explicit projected Langevin Monte Carlo (PLMC) algorithm for a non-convex potential U with super-linearly growing gradient, and carry out a non-asymptotic analysis of its sampling error in total variation distance. Equipped with time-independent regularity estimates for the corresponding Kolmogorov equation, we derive non-asymptotic bounds of order O(h|ln h|) on the total variation distance between the target distribution of the Langevin SDEs and the law induced by the PLMC scheme. Moreover, for a given precision ϵ, the smallest number of iterations of the classical Langevin Monte Carlo (LMC) scheme with a non-convex potential U and globally Lipschitz gradient of U can be guaranteed to be of order O((d^{3/2}/ϵ) · ln(d/ϵ) · ln(1/ϵ)). Numerical experiments are provided to confirm the theoretical findings.
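As a rough illustration of the kind of scheme the abstract describes, the sketch below implements a projected Langevin step: the usual LMC update followed by a projection of the iterate back onto a Euclidean ball, which tames the super-linearly growing gradient. This is a minimal sketch under assumed conventions; the projection radius `R` (which in the paper may depend on the step size h and dimension d), the function names, and the example double-well potential are all illustrative choices, not the paper's exact construction.

```python
import numpy as np

def plmc_sample(grad_U, x0, h, n_steps, R, rng=None):
    """Run an illustrative projected Langevin Monte Carlo chain.

    Each step applies the explicit LMC update
        x <- x - h * grad_U(x) + sqrt(2h) * xi,   xi ~ N(0, I),
    and then projects x back onto the ball of radius R, so the
    iterates stay bounded even when grad_U grows super-linearly.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)
        x = x - h * grad_U(x) + np.sqrt(2.0 * h) * xi
        norm = np.linalg.norm(x)
        if norm > R:
            # Projection onto the closed ball {||x|| <= R}.
            x = x * (R / norm)
    return x

# Illustrative target: a non-convex double-well potential
# U(x) = ||x||^4 / 4 - ||x||^2 / 2, whose gradient
# grad_U(x) = x * (||x||^2 - 1) grows super-linearly.
grad_U = lambda x: x * (np.dot(x, x) - 1.0)
sample = plmc_sample(grad_U, x0=np.zeros(2), h=0.01,
                     n_steps=500, R=3.0,
                     rng=np.random.default_rng(0))
```

By construction every iterate, and hence the returned sample, satisfies ||x|| ≤ R; without the projection, a large noise realization combined with the cubic gradient could make the explicit scheme diverge.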
| Original language | English |
|---|---|
| Place of Publication | Ithaca, NY |
| Number of pages | 31 |
| DOIs | |
| Publication status | Published - 28 Dec 2023 |
Keywords
- Langevin Monte Carlo sampling
- total variation distance
- non-convex potential
- projected scheme
- Kolmogorov equations