Sequential Neural Posterior and Likelihood Approximation
Preprint, 2021
Abstract
SNPLA avoids Markov chain Monte Carlo sampling and the correction steps of the parameter proposal function that are introduced in similar methods but can be numerically unstable or restrictive. By utilizing the reverse Kullback-Leibler (KL) divergence, SNPLA learns both the likelihood and the posterior in a sequential manner. Over four experiments, we show that SNPLA performs competitively when using the same number of model simulations as other methods, even though the inference problem for SNPLA is harder because the posterior and the likelihood function are learned jointly. Because it relies on normalizing flows, SNPLA generates posterior draws roughly four orders of magnitude faster than MCMC-based methods.
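To make the reverse-KL idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation). A reparameterized diagonal Gaussian stands in for the posterior normalizing flow q_phi(theta), and log_lik_psi is an illustrative frozen surrogate for the learned conditional likelihood flow p_psi(x_obs | theta); all names are assumptions for illustration. The final lines show why flow-based sampling is fast: posterior draws require only a forward pass, with no MCMC chain.

```python
import torch

dim = 2
prior = torch.distributions.MultivariateNormal(torch.zeros(dim), torch.eye(dim))

# Posterior parameters phi = (mean, log-std); a real implementation would
# use a normalizing flow (e.g., a masked autoregressive flow) instead.
mu = torch.zeros(dim, requires_grad=True)
log_sigma = torch.zeros(dim, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

def log_lik_psi(theta):
    # Hypothetical frozen surrogate for log p_psi(x_obs | theta); in SNPLA
    # this is a conditional flow trained on simulated (theta, x) pairs.
    target = torch.tensor([1.0, -0.5])
    return -0.5 * ((theta - target) ** 2).sum(dim=-1)

for step in range(500):
    q = torch.distributions.Normal(mu, log_sigma.exp())
    theta = q.rsample((256,))            # reparameterized draws from q_phi
    log_q = q.log_prob(theta).sum(dim=-1)
    # Reverse KL: E_q[ log q(theta) - log p(theta) - log p_psi(x_obs|theta) ]
    loss = (log_q - prior.log_prob(theta) - log_lik_psi(theta)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Posterior draws are a single forward pass, no MCMC needed:
samples = torch.distributions.Normal(mu, log_sigma.exp()).sample((10_000,))
```

The reverse KL is convenient here because its expectation is taken under q_phi itself, so the objective can be estimated from the posterior model's own reparameterized samples without evaluating the true (intractable) likelihood.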
Authors
Samuel Wiqvist
Lund University
Jes Frellsen
Technical University of Denmark (DTU)
Umberto Picchini
Department of Mathematical Sciences (Applied Mathematics and Statistics), Chalmers University of Technology and University of Gothenburg
Subject Categories
Probability Theory and Statistics