Sequential Neural Posterior and Likelihood Approximation
Preprint, 2021

We introduce the sequential neural posterior and likelihood approximation (SNPLA) algorithm. SNPLA
is a normalizing-flows-based algorithm for inference in implicit models, and is therefore a simulation-
based inference method that only requires simulations from a generative model. SNPLA avoids the Markov
chain Monte Carlo sampling and the correction steps for the parameter proposal function that similar
methods introduce, both of which can be numerically unstable or restrictive. By utilizing the reverse
Kullback–Leibler (KL) divergence, SNPLA learns both the likelihood and the posterior in a sequential
manner. Across four experiments, we show that SNPLA performs competitively when given the same number
of model simulations as other methods, even though the inference problem for SNPLA is more complex
due to the joint learning of the posterior and the likelihood function. Because it uses normalizing flows,
SNPLA generates posterior draws much faster (by four orders of magnitude) than MCMC-based methods.
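The reverse KL divergence mentioned above is the expectation, under the approximating flow, of the log-density ratio between the flow and the target; it can be estimated by Monte Carlo through the flow's own samples. The sketch below is a hypothetical illustration, not the authors' implementation: it uses a one-parameter affine "flow" (a shift-and-scale of a standard normal) and a Gaussian target, so the Monte Carlo estimate can be checked against the closed-form KL between two Gaussians.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mu, sigma):
    """Log density of N(mu, sigma^2)."""
    return -0.5 * np.log(2 * np.pi) - np.log(sigma) - 0.5 * ((x - mu) / sigma) ** 2

# Variational "flow" parameters (theta = mu_q + s_q * z, z ~ N(0, 1))
# and a Gaussian stand-in for the target density p.
mu_q, s_q = 0.5, 1.2
mu_p, s_p = 0.0, 1.0

# Monte Carlo estimate of KL(q || p) = E_{theta ~ q}[log q(theta) - log p(theta)],
# with log q obtained via the change-of-variables formula for the affine map.
z = rng.standard_normal(200_000)
theta = mu_q + s_q * z                          # flow forward pass
log_q = log_normal(z, 0.0, 1.0) - np.log(s_q)   # base log-density minus log |Jacobian|
log_p = log_normal(theta, mu_p, s_p)
kl_mc = np.mean(log_q - log_p)

# Closed-form KL between the two Gaussians, used here only as a check.
kl_exact = (np.log(s_p / s_q)
            + (s_q ** 2 + (mu_q - mu_p) ** 2) / (2 * s_p ** 2)
            - 0.5)

print(f"Monte Carlo KL: {kl_mc:.4f}, exact KL: {kl_exact:.4f}")
```

In SNPLA, the flow is a deep normalizing flow rather than a single affine map, and the target log-density is itself approximated by a learned likelihood network, but the reverse-KL training signal has this same Monte Carlo form.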

Authors

Samuel Wiqvist

Lunds universitet

Jes Frellsen

Danmarks Tekniske Universitet (DTU)

Umberto Picchini

Chalmers, Mathematical Sciences, Applied Mathematics and Statistics

Subject categories

Probability theory and statistics

More information

Last updated

2022-02-16