Tuesday, June 24, 2014

Marginal likelihoods via power posteriors and stepping stones in Julia

In a previous post I looked at how to calculate marginal likelihoods for a simple model in R. For those of you interested in Julia, here is the example, ported to Julia. I haven't included the model-switch integration - I leave that as an exercise, but feel free to post a comment if you want it posted here. The following, in the form of an IJulia notebook, can be downloaded as a gist.

To recap, under the Bayesian paradigm, inference is based on the posterior probability over the parameters of interest. It's helpful to think of our inferences being conditional on a given model, $M$ with a parameter vector $\theta \in \Theta$. Given a dataset, $D$, and a model, the posterior distribution of the parameter values is given by Bayes' theorem.

\begin{align} Pr(\theta|D,M) = \frac{Pr(D|\theta,M)Pr(\theta|M)}{Pr(D|M)} \end{align}

$Pr(D|\theta,M)$ is the likelihood function, $Pr(\theta|M)$ is the prior probability, and $Pr(D|M)$ is known as the marginal likelihood, predictive probability, or evidence. $Pr(D|M)$ is a normalising constant that ensures that $Pr(\theta|D,M)$ is a probability.

\begin{align} Pr(D|M) = \int_{\Theta}Pr(D|\theta,M)Pr(\theta|M) d\theta \end{align}

However, one often wants to compare the fit of different models. As a function of the model $M$, the marginal likelihood can be interpreted as the likelihood of the model $M$ given the data $D$. Hence, to choose between several models, one simply chooses the one with the highest marginal likelihood. When comparing two models, say $M_0$ and $M_1$, a ratio of marginal likelihoods, known as the Bayes factor, $K$, is usually defined:

\begin{align} K_{01} = \frac{Pr(D|M_1)}{Pr(D|M_0)} \end{align}

Interpretations of the Bayes factor have been provided by Jeffreys, and by Kass and Raftery.

Unfortunately, in most cases, it is not possible to obtain an exact solution of the marginal likelihood. A number of approaches have been described to obtain an approximate numerical estimate of the marginal likelihood; here, I illustrate two approaches based on tempering.

A simple example

Before I describe what tempering means in this context, let's consider a simple example, for which there is an analytical solution for the marginal likelihood. Consider the problem of fitting a set of $n=100$ exponential random variables, $X$, with rate parameter $\lambda=3$.

We can generate these in Julia as follows.

In [1]:
using Distributions
srand(1)
lambda=3.0
n=100
x=rand(Exponential(1.0/lambda),n);

However, as I want to compare with the R version, I'll use the same random data as in my R code.

In [2]:
x=[0.251727277709448, 0.393880926369035, 0.0485689089012643, 0.0465984206228326, 
0.145356208593058, 0.964989512488025, 0.409854017804964, 0.179894279999038, 
0.318855831232818, 0.0490153301679842, 0.463578376270143, 0.254009951823358, 
0.41253451691161, 1.47464473922761, 0.351514389103556, 0.34508131534358, 
0.625345057469416, 0.218248879071325, 0.11231115879491, 0.196159907151014, 
0.788171751007882, 0.213964196077238, 0.0980401292132835, 0.18862184152628, 
0.0353575410942237, 0.0198130534651379, 0.192904154459635, 1.31964428405416, 
0.39110403527359, 0.332270985081281, 0.478428447791025, 0.0124228421288232, 
0.108003384278466, 0.44015597643793, 0.0678367833438553, 0.340908625770017, 
0.100580311380327, 0.241738101143046, 0.250514230575028, 0.0783424836236563, 
0.359960379064981, 0.342748967916672, 0.430753882558053, 0.417701784817779, 
0.18488046588997, 0.100427665458216, 0.431041552050316, 0.331518596024219, 
0.17139143201833, 0.669277467426467, 0.140747481646637, 0.72625752274651, 
1.07259633924935, 0.185943118296564, 0.19820588392516, 0.325798601941002, 
0.0699555268511176, 0.103149285384764, 0.368645422767504, 0.258062588036958, 
0.0298913594374683, 0.36939221892146, 0.0824214177800864, 0.523995615513929, 
1.61093758162402, 0.143710710729162, 0.910129771215431, 0.378943805090106, 
0.271122748654481, 0.279002163557281, 0.594921801513742, 0.770823875400138, 
0.969962422990418, 0.0951969947976371, 0.129595590685436, 0.0173518159426749, 
0.117290165961365, 0.521747115103197, 0.271511935442642, 0.919747929357203, 
0.128731187824502, 0.336085485590899, 0.272838062945121, 0.0197537352020542, 
0.761284491005254, 0.268056970386112, 0.527898693352688, 0.411263835931219, 
0.448548006190264, 0.700125743282549, 0.345032382432028, 0.149150631080071, 
0.347869504850014, 0.0869427483218412, 0.227076369337738, 0.0879127546757239, 
0.148868551484028, 0.0702022976862887, 0.0441904686409075, 0.116296115132973
]
Out[2]:
100-element Array{Float64,1}:
 0.251727 
 0.393881 
 0.0485689
 0.0465984
 0.145356 
 0.96499  
 0.409854 
 0.179894 
 0.318856 
 0.0490153
 ⋮        
 0.149151 
 0.34787  
 0.0869427
 0.227076 
 0.0879128
 0.148869 
 0.0702023
 0.0441905
 0.116296 

The likelihood of the data given the rate parameter $\lambda$ is as follows.

\begin{align} Pr(X|\lambda) & = \prod_{i=1}^{n=100} \lambda \rm{exp}(-\lambda x_i) \cr & = \lambda^n \rm{exp}(-\lambda n \bar{x}) \end{align}

where $\bar{x}$ is the sample mean of $X$.
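
As a quick sanity check (this is not part of the original post, and the helper names are purely illustrative), the closed-form expression above can be compared against summing the exponential log density over the data, using the same Distributions package and the same 2014-era Julia style as the rest of the post.

using Distributions
# Closed-form log likelihood: n*log(lambda) - lambda*n*mean(x)
loglik_closed(lambda,x) = length(x)*log(lambda) - lambda*length(x)*mean(x)
# Direct evaluation by summing the exponential log density over the data
loglik_direct(lambda,x) = sum(logpdf(Exponential(1.0/lambda),x))
# The two agree up to floating-point error, e.g. compare
# loglik_closed(3.0,x) with loglik_direct(3.0,x)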

As described in Wikipedia (which is remarkably handy for distributions), if we assume a Gamma($\alpha$,$\beta$) prior on the rate parameter (the conjugate prior for the exponential distribution), the posterior distribution of $\lambda$ is Gamma($\alpha+n$,$\beta+n \bar{x}$).

\begin{align} Pr(\lambda|X,\alpha, \beta) & \propto Pr(X|\lambda) \times Pr(\lambda| \alpha, \beta) \cr & = \lambda^n \rm{exp}(-\lambda n \bar{x}) \times \frac{\beta^\alpha}{\Gamma(\alpha)} \lambda^{\alpha-1} \rm{exp}(-\lambda \beta) \cr & = \frac{\beta^\alpha}{\Gamma(\alpha)} \lambda^{\alpha+n-1} \rm{exp}(-\lambda (\beta + n \bar{x})) \end{align}

The marginal likelihood of this model can be calculated by integrating $Pr(X|\lambda) \times Pr(\lambda| \alpha, \beta)$ over $\lambda$.

\begin{align} \int_{\lambda=0}^{\infty}\frac{\beta^\alpha}{\Gamma(\alpha)} \lambda^{\alpha+n-1} \rm{exp}(-\lambda (\beta + n \bar{x})) \; d\lambda & = \frac{\beta^\alpha}{\Gamma(\alpha)} \int_{0}^{\infty}\lambda^{\alpha+n-1} \rm{exp}(-\lambda (\beta + n \bar{x})) \; d\lambda \cr & = \frac{\beta^\alpha}{\Gamma(\alpha)} \frac{\Gamma(\alpha+n)}{(\beta+ n \bar{x})^{\alpha+n}} \end{align}

The log marginal likelihood in Julia can be calculated as follows.

In [3]:
function lml(x::Vector{Float64},alpha::Float64,beta::Float64)
    mux=mean(x)
    n=convert(Float64,length(x))
    alpha*log(beta)-(alpha+n)*log(beta+n*mux)+lgamma(alpha+n)-lgamma(alpha)
end
Out[3]:
lml (generic function with 1 method)

For $\alpha=1$ and $\beta=1$, the log marginal likelihood for these data is around 3.6.

In [4]:
alpha=1.0
beta=1.0
lml(x,alpha,beta)
Out[4]:
3.6274358925479078

In many cases, however, we don't have an analytical solution to the posterior distribution or the marginal likelihood.

Tempering

Several approaches to calculating marginal likelihoods are based on the idea of tempering, in which we consider running MCMC at a range of different (inverse) 'temperatures', obtained by raising the likelihood to a power between 0 and 1; when the power is 0, we sample from the prior, while when the power is 1, we sample from the posterior. While we use the tempered likelihood to determine acceptance probabilities in the MCMC, we will use samples of the untempered likelihood to compute the marginal likelihood.
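
To make this concrete (the post does not write the tempered distribution out explicitly), the tempered posterior at inverse temperature $t \in [0,1]$ is

\begin{align} Pr_t(\theta|D,M) \propto Pr(D|\theta,M)^{t}\, Pr(\theta|M) \end{align}

so that $t=0$ gives the prior and $t=1$ gives the posterior.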

In [5]:
# Random walk Metropolis sampler for the tempered posterior, in which the
# log likelihood is raised to the power temp (the inverse temperature)
function met_temper(x::Vector{Float64},lambda0::Float64,alpha::Float64,beta::Float64,sigma::Float64,temp::Float64,niters::Int64)
    lambdavec=vec(Array(Float64,niters))
    llikvec=vec(Array(Float64,niters))
    lliktvec=vec(Array(Float64,niters))
    lpriorvec=vec(Array(Float64,niters))
    lpostvec=vec(Array(Float64,niters))
    lambda=lambda0
    llik=sum(logpdf(Exponential(1.0/lambda),x))
    llikt=temp*llik
    # Gamma in Distributions.jl is parameterised by shape and scale, so use
    # scale 1/beta to match the rate-beta prior used in the analytical result
    lprior=logpdf(Gamma(alpha,1.0/beta),lambda)
    lpost=llikt+lprior # tempered log posterior, matching the acceptance ratio below
    for i in 1:niters
        lambdas = lambda + rand(Normal(0,sigma),1)[1] # random walk proposal
        if lambdas>0 # reject non-positive proposals outright
            lliks=sum(logpdf(Exponential(1/lambdas),x))
            llikst=temp*lliks
            lpriors=logpdf(Gamma(alpha,1.0/beta),lambdas)
            lposts=llikst+lpriors
            A=exp(lposts-lpost) # Metropolis acceptance ratio (symmetric proposal)
            if rand()<A
                lambda,llik,llikt,lprior,lpost=lambdas,lliks,llikst,lpriors,lposts
            end
        end
        lambdavec[i],llikvec[i],lliktvec[i],lpriorvec[i],lpostvec[i]=lambda,llik,llikt,lprior,lpost
    end
    [lambdavec llikvec lliktvec lpriorvec lpostvec]
end
Out[5]:
met_temper (generic function with 1 method)

I first run the chain at a range of temperatures.

In [6]:
tempvec=linspace(0,1.0,101).^5.0 # Range of temperatures
numtemp=length(tempvec)
pplist=vec(Array(Matrix{Float64},numtemp))
burnin=1000
niters=10000
lambda0=1.0 # Initial condition for lambda
sigma=1.0 # Dispersion for the random walk sampler
for i in 1:numtemp
    out=met_temper(x,lambda0,alpha,beta,sigma,tempvec[i],niters+burnin)
    pplist[i]=out
end

Power posteriors

The power posterior approach is based on integrating the expectation of the log likelihood (for a given chain) across the inverse temperatures (see Friel and Pettitt (2008) and Lartillot and Philippe (2006)). Friel and Pettitt used a trapezoidal scheme to integrate the likelihood, while a revised version of the approach uses an improved trapezoidal scheme, which also requires the variance of the log likelihood.
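
For reference (the identity is used implicitly below), the power posterior/thermodynamic integration identity is

\begin{align} \log Pr(D|M) = \int_{0}^{1} E_{t}\left[\log Pr(D|\theta,M)\right] dt \end{align}

where the expectation is taken over the tempered posterior at inverse temperature $t$. Writing $\bar{\ell}_k$ and $v_k$ for the sample mean and variance of the log likelihood at temperature $t_k$, the improved trapezoidal scheme implemented below approximates this integral as

\begin{align} \log Pr(D|M) \approx \sum_{k} (t_{k+1}-t_{k})\frac{\bar{\ell}_{k+1}+\bar{\ell}_{k}}{2} - \frac{(t_{k+1}-t_{k})^{2}}{12}\left(v_{k+1}-v_{k}\right) \end{align}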

I first extract the mean and the variance of the log likelihoods.

In [7]:
ell=zeros(numtemp)
vll=zeros(numtemp)
for i in 1:numtemp
    out=pplist[i]
    ell[i]=mean(out[burnin:niters,2])
    vll[i]=var(out[burnin:niters,2])
end

The following uses this information to compute the log marginal likelihood, using the modified trapezoidal scheme.

In [8]:
function ppml(ell::Vector{Float64},vll::Vector{Float64},tempvec::Vector{Float64})
    N=length(ell)
    res=0.0
    for i in 1:(N-1)
        wts = tempvec[i+1]-tempvec[i]
        res = res+wts*((ell[i+1]+ell[i])/2.0)-((wts^2)/12.0)*(vll[i+1]-vll[i])
    end
    res
end
Out[8]:
ppml (generic function with 1 method)

This function calculates bounds on the log marginal likelihood, based on the discretisation error of the numerical integration.
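
These are simply the left and right Riemann sums (a justification not spelled out in the post): since $\frac{d}{dt}E_t\left[\log Pr(D|\theta,M)\right] = \mathrm{Var}_t\left[\log Pr(D|\theta,M)\right] \geq 0$, the expected log likelihood is non-decreasing in $t$, so, up to Monte Carlo error in the expectations,

\begin{align} \sum_{k} (t_{k+1}-t_{k})\,\bar{\ell}_{k} \;\leq\; \log Pr(D|M) \;\leq\; \sum_{k} (t_{k+1}-t_{k})\,\bar{\ell}_{k+1} \end{align}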

In [9]:
function boundml(ell::Vector{Float64},tempvec::Vector{Float64})
  tempdiff = tempvec[2:length(tempvec)] .- tempvec[1:(length(tempvec)-1)]
  ub = sum(tempdiff .* ell[2:length(ell)])
  lb = sum(tempdiff .* ell[1:(length(ell)-1)])
  [lb ub]
end
Out[9]:
boundml (generic function with 1 method)

Now I can compute the log marginal likelihood from the series of tempered chains, and compare with the analytical result.

In [10]:
ppml(ell,vll,tempvec)
Out[10]:
3.6195674785397585
In [11]:
boundml(ell,tempvec)
Out[11]:
1x2 Array{Float64,2}:
 3.50298  3.73064

Stepping stone

The stepping stone approach also employs tempered distributions, but rather than integrating the expectation of the log likelihood, it uses importance sampling between adjacent tempered chains to estimate a series of ratios of normalising constants. The product of these ratios gives an estimate of the marginal likelihood. Here, I accumulate the logs of these ratios to estimate the log marginal likelihood.
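
In symbols (a summary not given in the post), writing $z_t = \int Pr(D|\theta,M)^{t} Pr(\theta|M)\, d\theta$, we have $z_0 = 1$ and $z_1 = Pr(D|M)$, so

\begin{align} Pr(D|M) = \prod_{k=1}^{K} \frac{z_{t_k}}{z_{t_{k-1}}}, \qquad \frac{z_{t_k}}{z_{t_{k-1}}} \approx \frac{1}{N}\sum_{j=1}^{N} \exp\left[(t_k - t_{k-1})\,\ell(\theta_j)\right] \end{align}

where $\ell$ is the untempered log likelihood and the $\theta_j$ are samples from the chain at inverse temperature $t_{k-1}$. The code below subtracts a reference (maximum) log likelihood before exponentiating, purely for numerical stability.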

In [12]:
lrss=0.0
for i in 2:numtemp
  tempdiff = tempvec[i]-tempvec[i-1]
  logmaxll = maximum(pplist[i][(burnin+1):(burnin+niters),2])
  oldll = pplist[i-1][(burnin+1):(burnin+niters),2]
  lrss = lrss+tempdiff*logmaxll
  lrss = lrss+log((1/length(oldll))*sum(exp(tempdiff*(oldll-logmaxll))))
end
lrss
Out[12]:
3.6237428915234613

As you can see, the code isn't that different between R and Julia (although the ability to do multiple assignment in a single line in Julia makes for more compact code). However, where they differ is in speed. The bottleneck is in running the MCMC across multiple temperatures. Firstly, let's calculate the time elapsed for a single run in Julia.

In [13]:
@elapsed out=met_temper(x,lambda0,alpha,beta,sigma,1.0,100000)
Out[13]:
0.312924833

In R, the equivalent code runs in about 2.7 seconds, or about ten times slower.


Monday, June 23, 2014

Marginal likelihood via power posteriors and stepping stones

Under the Bayesian paradigm, inference is based on the posterior probability over the parameters of interest. It’s helpful to think of our inferences being conditional on a given model, \(M\) with a parameter vector \(\theta \in \Theta\). Given a dataset, \(D\), and a model, the posterior distribution of the parameter values is given by Bayes’ theorem.

\[ \begin{align} Pr(\theta|D,M) = \frac{Pr(D|\theta,M)Pr(\theta|M)}{Pr(D|M)} \end{align} \]

\(Pr(D|\theta,M)\) is the likelihood function, \(Pr(\theta|M)\) is the prior probability, and \(Pr(D|M)\) is known as the marginal likelihood, predictive probability, or evidence. \(Pr(D|M)\) is a normalising constant that ensures that \(Pr(\theta|D,M)\) is a probability.

\[ \begin{align} Pr(D|M) = \int_{\Theta}Pr(D|\theta,M)Pr(\theta|M) d\theta \end{align} \]

However, one often wants to compare the fit of different models. As a function of the model \(M\), the marginal likelihood can be interpreted as the likelihood of the model \(M\) given the data \(D\). Hence, to choose between several models, one simply chooses the one with the highest marginal likelihood. When comparing two models, say \(M_0\) and \(M_1\), a ratio of marginal likelihoods, known as the Bayes factor, \(K\), is usually defined:

\[ \begin{align} K_{01} = \frac{Pr(D|M_1)}{Pr(D|M_0)} \end{align} \]

Interpretations of the Bayes factor have been provided by Jeffreys, and by Kass and Raftery.

Unfortunately, in most cases, it is not possible to obtain an exact solution of the marginal likelihood. A number of approaches have been described to obtain an approximate numerical estimate of the marginal likelihood; here, I illustrate two approaches based on tempering. An Rmarkdown workbook is available as a gist.

A simple example

Before I describe what tempering means in this context, let’s consider a simple example, for which there is an analytical solution for the marginal likelihood. Consider the problem of fitting a set of \(n=100\) exponential random variables, \(X\), with rate parameter \(\lambda=3\).

We can generate these in R as follows.

set.seed(1)
lambd <- 3
n <- 100
x <- rexp(n,lambd)

The likelihood of the data given the rate parameter \(\lambda\) is as follows.

\[ \begin{align} Pr(X|\lambda) & = \prod_{i=1}^{n=100} \lambda \rm{exp}(-\lambda x_i) \cr & = \lambda^n \rm{exp}(-\lambda n \bar{x}) \end{align} \]

where \(\bar{x}\) is the sample mean of \(X\).

As described in Wikipedia (which is remarkably handy for distributions), if we assume a Gamma(\(\alpha\),\(\beta\)) prior on the rate parameter (the conjugate prior for the exponential distribution), the posterior distribution of \(\lambda\) is Gamma(\(\alpha+n\),\(\beta+n \bar{x}\)).

\[ \begin{align} Pr(\lambda|X,\alpha, \beta) & \propto Pr(X|\lambda) \times Pr(\lambda| \alpha, \beta) \cr & = \lambda^n \rm{exp}(-\lambda n \bar{x}) \times \frac{\beta^\alpha}{\Gamma(\alpha)} \lambda^{\alpha-1} \rm{exp}(-\lambda \beta) \cr & = \frac{\beta^\alpha}{\Gamma(\alpha)} \lambda^{\alpha+n-1} \rm{exp}(-\lambda (\beta + n \bar{x})) \end{align} \]

The marginal likelihood of this model can be calculated by integrating \(Pr(X|\lambda) \times Pr(\lambda| \alpha, \beta)\) over \(\lambda\).

\[ \begin{align} \int_{\lambda=0}^{\infty}\frac{\beta^\alpha}{\Gamma(\alpha)} \lambda^{\alpha+n-1} \rm{exp}(-\lambda (\beta + n \bar{x})) \; d\lambda & = \frac{\beta^\alpha}{\Gamma(\alpha)} \int_{0}^{\infty}\lambda^{\alpha+n-1} \rm{exp}(-\lambda (\beta + n \bar{x})) \; d\lambda \cr & = \frac{\beta^\alpha}{\Gamma(\alpha)} \frac{\Gamma(\alpha+n)}{(\beta+ n \bar{x})^{\alpha+n}} \end{align} \]

The log marginal likelihood in R can be calculated as follows.

lml <- function(x,alph,bet){
  mux <- mean(x)
  n <- length(x)
  alph*log(bet)-(alph+n)*log(bet+n*mux)+lgamma(alph+n)-lgamma(alph)
}

For \(\alpha=1\) and \(\beta=1\), the log marginal likelihood for these data is around 3.6.

alph <- 1
bet <- 1
lml(x,alph,bet)
## [1] 3.627

In many cases, however, we don’t have an analytical solution to the posterior distribution or the marginal likelihood. To obtain the posterior, we can use MCMC with Metropolis sampling. I first define a Metropolis sampler, which returns vectors of parameter values, log likelihoods, log priors, and log posteriors. As described before, we have to be careful, as the parameter of the exponential distribution has to be positive. I use a simple random walk sampler, rejecting any proposed values of \(\lambda \le 0\).
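
For reference (the rule is implicit in the code below), with a symmetric random walk proposal the Metropolis acceptance probability reduces to a ratio of unnormalised posteriors, which is computed on the log scale:

\[ \begin{align} A = \min\left(1, \frac{Pr(X|\lambda^{\ast})\,Pr(\lambda^{\ast}|\alpha,\beta)}{Pr(X|\lambda)\,Pr(\lambda|\alpha,\beta)}\right) \end{align} \]

where \(\lambda^{\ast}\) is the proposed value. The code simply takes the exponential of the difference in log posteriors (without capping at 1), which gives the same accept/reject decision, and proposals with \(\lambda^{\ast} \le 0\) have zero prior density and are rejected outright.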

met <- function(x,lambda0,alph,bet,sigma,niters){
  lambdvec <- numeric(niters)
  llvec <- numeric(niters)
  lpvec <- numeric(niters)
  lpostvec <- numeric(niters)
  lambd <- lambda0
  ll <- sum(dexp(x,lambd,log=TRUE))
  lp <- dgamma(lambd,shape=alph,rate=bet,log=TRUE)
  lpost <- ll+lp
  for(i in 1:niters){
    lambds <- lambd+rnorm(1,mean=0,sd=sigma)
    if(lambds>0){
      lls <- sum(dexp(x,lambds,log=TRUE))
      lps <- dgamma(lambds,shape=alph,rate=bet,log=TRUE)
      lposts <- lls+lps
      A <- exp(lposts-lpost)
      if(runif(1)<A){
        lambd <- lambds
        ll <- lls
        lp <- lps
        lpost <- lposts
      }
    }
    lambdvec[i] <- lambd
    llvec[i] <- ll
    lpvec[i] <- lp
    lpostvec[i] <- lpost
  }
  return(list(lambdvec,llvec,lpvec,lpostvec))
}

To run the sampler, I provide an initial value for \(\lambda\), the standard deviation of the normal distribution used for the random walk, and the number of iterations.

lambda0 <- 1
sigma <- 1
niters <- 1000000
out <- met(x,lambda0,alph,bet,sigma,niters)

Now I can plot out the density and compare it with the analytical solution.

hist(out[[1]],100,freq=FALSE,main="",xlab=expression(lambda)) # lambda
mux <- mean(x)
curve(dgamma(x,shape=alph+n,rate=bet+n*mux),add=TRUE,col=2,lwd=2)

[Figure: histogram of the posterior samples of \(\lambda\), with the analytical Gamma posterior density overlaid.]

The fit using MCMC gives us the posterior distribution, but not the marginal likelihood. While there are methods to obtain the log marginal likelihood from the sample from the posterior, they suffer from poor performance due to high (potentially infinite) variance.
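
The best-known example (mentioned here only for context; it is not discussed further in the post) is the harmonic mean estimator, which uses only draws \(\lambda_j\) from the posterior:

\[ \begin{align} \widehat{Pr}(X|M) = \left[\frac{1}{N}\sum_{j=1}^{N} \frac{1}{Pr(X|\lambda_j)}\right]^{-1} \end{align} \]

Its variance can be infinite when the prior is much more diffuse than the posterior, which is one reason to prefer the tempering-based estimators described next.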

Tempering

Several approaches to calculating marginal likelihoods are based on the idea of tempering, in which we consider running MCMC at a range of different (inverse) ‘temperatures’, obtained by raising the likelihood to a power between 0 and 1; when the power is 0, we sample from the prior, while when the power is 1, we sample from the posterior. While we use the tempered likelihood to determine acceptance probabilities in the MCMC, we will use samples of the untempered likelihood to compute the marginal likelihood.

met.temper <- function(x,lambda0,alph,bet,sigma,temp,niters){
  lambdvec <- numeric(niters)
  lltvec <- numeric(niters)
  llvec <- numeric(niters)
  lpvec <- numeric(niters)
  lpostvec <- numeric(niters)
  lambd <- lambda0
  ll <- sum(dexp(x,lambd,log=TRUE))
  llt <- temp*ll
  lp <- dgamma(lambd,shape=alph,rate=bet,log=TRUE)
  lpost <- llt+lp
  for(i in 1:niters){
    lambds <- lambd+rnorm(1,mean=0,sd=sigma)
    if(lambds>0){
      lls <- sum(dexp(x,lambds,log=TRUE))
      llst <- temp*lls
      lps <- dgamma(lambds,shape=alph,rate=bet,log=TRUE)
      lposts <- llst+lps
      A <- exp(lposts-lpost)
      if(runif(1)<A){
        lambd <- lambds
        ll <- lls
        llt <- llst
        lp <- lps
        lpost <- lposts
      }
    }
    lambdvec[i] <- lambd
    llvec[i] <- ll
    lltvec[i] <- llt
    lpvec[i] <- lp
    lpostvec[i] <- lpost
  }
  return(list(lambdvec,llvec,lltvec,lpvec,lpostvec))
}

I first run the chain at a range of temperatures. For efficiency, I start with the chain at an inverse temperature of 1 (i.e. the posterior), with an initial value of \(\lambda\) obtained from my original MCMC run. Then, for each subsequent (lower) temperature, I start the tempered chain with an initial value taken from the end of the previous tempered chain. In this way, we reduce the number of simulations needed in order to get a representative sample from the log likelihood of the tempered chains.

tempvec <- seq(0,1,by=0.01)^5
numtemp <- length(tempvec)
pplist <- list()
burnin <- 1000
niters <- 10000
for(i in numtemp:1){
  l0 <- tail(out[[1]],1)
  out <- met.temper(x,l0,alph,bet,sigma,tempvec[i],niters+burnin)
  pplist[[i]] <- out
}

Power posteriors

The power posterior approach is based on integrating the expectation of the log likelihood (for a given chain) across the inverse temperatures (see Friel and Pettitt (2008) and Lartillot and Philippe (2006)). Friel and Pettitt used a trapezoidal scheme to integrate the likelihood, while a revised version of the approach uses an improved trapezoidal scheme, which also requires the variance of the log likelihood.

I first extract the mean and the variance of the log likelihoods.

ell <- rep(0,numtemp)
vll <- rep(0,numtemp)
for(i in 1:numtemp){
  out <- pplist[[i]]
  ell[i] <- mean(out[[2]][burnin:niters])
  vll[i] <- var(out[[2]][burnin:niters])
}

Let’s plot this out.

plot(ell~tempvec,type="l",xlab="Temperature",ylab="Mean lnL")

[Figure: mean log likelihood (Mean lnL) as a function of the inverse temperature.]

You should be able to anticipate a potential problem here with the numerical integration, associated with the rather uninformative prior: at inverse temperatures close to zero, the chain is sampling almost from the vague prior, so the mean log likelihood is very low and changes rapidly, which is why the temperature schedule is concentrated near zero.

The following uses this information to compute the log marginal likelihood, using either the standard or the modified trapezoidal scheme.

ppml <- function(ell,vll,tempvec,modify=TRUE){
  N <- length(ell)
  res <- 0
  for(i in 1:(N-1)){
    wts <- tempvec[i+1]-tempvec[i]
    if(modify){ # Modified trapezoidal rule
      res <- res+wts*((ell[i+1]+ell[i])/2.0)-((wts^2)/12)*(vll[i+1]-vll[i])
    }else{
      res <- res+wts*((ell[i+1]+ell[i])/2.0)
    }
  }
  res
}

I can also obtain bounds on the log marginal likelihood from the numerical integration: because the expected log likelihood increases with the inverse temperature, the left and right Riemann sums bracket the integral (and hence the trapezoidal estimate). Note that this does not take into account the error in estimating the expectation of the log likelihood, but it does give a nice, easy way to evaluate at least one source of error.

boundml <- function(ell,tempvec){
  tempdiff <- tempvec[2:length(tempvec)]-tempvec[1:(length(tempvec)-1)]
  ub <- sum(tempdiff*ell[2:length(ell)])
  lb <- sum(tempdiff*ell[1:(length(ell)-1)])
  c(lb,ub)
}

Now I can compute the log marginal likelihood from the series of tempered chains, and compare with the analytical result.

ppml(ell,vll,tempvec,FALSE)
## [1] 3.616
ppml(ell,vll,tempvec,TRUE) # modified
## [1] 3.618
boundml(ell,tempvec)
## [1] 3.501 3.730

Stepping stone

The stepping stone approach also employs tempered distributions, but rather than integrating the expectation of the log likelihood, it uses importance sampling between adjacent tempered chains to estimate a series of ratios of normalising constants. The product of these ratios gives an estimate of the marginal likelihood. Here, I accumulate the logs of these ratios to estimate the log marginal likelihood.

lrss <- 0
for(i in 2:numtemp){
  tempdiff <- tempvec[i]-tempvec[i-1]
  logmaxll <- max(pplist[[i]][[2]][(burnin+1):(burnin+niters)])
  oldll <- pplist[[i-1]][[2]][(burnin+1):(burnin+niters)]
  lrss <- lrss+tempdiff*logmaxll
  lrss <- lrss+log((1/length(oldll))*sum(exp(tempdiff*(oldll-logmaxll))))
}
lrss
## [1] 3.617

The stepping stone estimator is biased for estimation of the log marginal likelihood (the ratio estimates are unbiased, but taking logs introduces bias), but overall appears to be slightly superior to the power posterior approach. However, as the computational burden is mostly due to the simulation of the tempered distributions, one can easily calculate both estimates.

Computing Bayes factors

To compare two models, we can calculate a Bayes factor, which is the ratio of their two marginal likelihoods. For simplicity, let us compare two models for the data, in which both assume an exponential distribution, but with different prior values for \(\alpha\) and \(\beta\), say \(\alpha_1=1,\beta_1=1\) versus \(\alpha_2=2,\beta_2=0.5\). In principle, we could assume completely different distributions and compare those instead, but this example allows us to use the code above.

alph2 <- 2
bet2 <- 0.5

The Bayes factor for this new model, compared to the old model, is about 3.05.

lbf <- lml(x,alph2,bet2)-lml(x,alph,bet)
lbf # Log Bayes factor
## [1] 1.116
exp(lbf) # Bayes factor
## [1] 3.053

Using marginal likelihoods separately

I first re-run the tempered chains under the second prior, so that its marginal likelihood can be estimated in the same way as before.

pplist2 <- list()
for(i in 1:numtemp){
  out <- met.temper(x,lambda0,alph2,bet2,sigma,tempvec[i],niters+burnin)
  pplist2[[i]] <- out
}

Now I can compare the Bayes factors obtained through simulation. Firstly, using power posteriors.

ell2 <- rep(0,numtemp)
vll2 <- rep(0,numtemp)
for(i in 1:numtemp){
  out <- pplist2[[i]]
  ell2[i] <- mean(out[[2]][burnin:niters])
  vll2[i] <- var(out[[2]][burnin:niters])
}
lbf.pp <- ppml(ell2,vll2,tempvec,TRUE)-ppml(ell,vll,tempvec,TRUE) # modified
lbf.pp
## [1] 1.109
exp(lbf.pp)
## [1] 3.032

Similarly, one could use a stepping stone approach to calculate the marginal likelihoods for each model, and hence the Bayes factor.

Computing the Bayes factor directly

One of the problems with calculating the Bayes factor using two separate marginal likelihoods is that the errors in estimating the individual marginal likelihoods combine. This can be a serious issue when the prior is vague and/or the difference in the marginal likelihoods is small. Another approach is to compute a bridge between the posterior distributions of the two models, so-called ‘model-switch integration’. The difference between this and the previous case is that we sample from a tempered bridge between the two (unnormalised) posterior distributions, and integrate the expected difference between the untempered log posterior values.
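
In symbols (not written out in the post), let \(q_1(\lambda)\) and \(q_2(\lambda)\) be the unnormalised posteriors (likelihood times prior) under the first (\(\alpha_1,\beta_1\)) and second (\(\alpha_2,\beta_2\)) priors. The bridging distribution at inverse temperature \(t\) is proportional to \(q_1(\lambda)^{1-t} q_2(\lambda)^{t}\), and path sampling then gives the log Bayes factor of the second model relative to the first directly:

\[ \begin{align} \log K = \int_{0}^{1} E_{t}\left[\log q_2(\lambda) - \log q_1(\lambda)\right] dt \end{align} \]

where the expectation is over the bridging distribution. This difference in unnormalised log posteriors is what the sampler below records, and ppml is then reused to perform the integration.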

met.switch <- function(x,lambda0,alph1,bet1,alph2,bet2,sigma,temp,niters){
  lambdvec <- numeric(niters)
  lpost1vec <- numeric(niters)
  lpost2vec <- numeric(niters)
  lpostvec <- numeric(niters)
  lambd <- lambda0
  ll1 <- sum(dexp(x,lambd,log=TRUE))
  ll2 <- sum(dexp(x,lambd,log=TRUE))
  lp1 <- dgamma(lambd,shape=alph1,rate=bet1,log=TRUE)
  lp2 <- dgamma(lambd,shape=alph2,rate=bet2,log=TRUE)
  lpost1 <- ll1+lp1
  lpost2 <- ll2+lp2
  lpost <- (1-temp)*lpost1+temp*lpost2
  for(i in 1:niters){
    lambds <- lambd+rnorm(1,mean=0,sd=sigma)
    if(lambds>0){
      lls1 <- sum(dexp(x,lambds,log=TRUE))
      lls2 <- sum(dexp(x,lambds,log=TRUE))
      lps1 <- dgamma(lambds,shape=alph1,rate=bet1,log=TRUE)
      lps2 <- dgamma(lambds,shape=alph2,rate=bet2,log=TRUE)
      lposts1 <- lls1+lps1
      lposts2 <- lls2+lps2
      lposts <- (1-temp)*lposts1+temp*lposts2
      A <- exp(lposts-lpost)
      if(runif(1)<A){
        lambd <- lambds
        ll1 <- lls1
        ll2 <- lls2
        lp1 <- lps1
        lp2 <- lps2
        lpost1 <- lposts1
        lpost2 <- lposts2
        lpost <- lposts
      }
    }
    lambdvec[i] <- lambd
    lpost1vec[i] <- lpost1
    lpost2vec[i] <- lpost2
    lpostvec[i] <- lpost
  }
  return(list(lambdvec,lpost1vec,lpost2vec,lpostvec))
}

I run the tempered distributions, but now bridging between the two posterior distributions.

switchlist <- list()
for(i in 1:numtemp){
  out <- met.switch(x,lambda0,alph,bet,alph2,bet2,sigma,tempvec[i],niters+burnin)
  switchlist[[i]] <- out
}

The expected difference in log posteriors tends to vary more linearly with temperature than the expected log likelihood does in the standard power posterior/thermodynamic integration approach, so it may make more sense to space the temperatures more uniformly between 0 and 1. For simplicity, I’ve just kept the same temperature regime I used previously.

To calculate the log Bayes factor, I first calculate the difference in the posterior probabilities for each model, across the series of chains.

elpt <- rep(0,numtemp)
vlpt <- rep(0,numtemp)
for(i in 1:numtemp){
  out <- switchlist[[i]]
  diffp <- out[[3]][burnin:niters]-out[[2]][burnin:niters]
  elpt[i] <- mean(diffp)
  vlpt[i] <- var(diffp)
}

I can then use the same code as before to work out the log Bayes factor.

lbf.switch <- ppml(elpt,vlpt,tempvec,TRUE)
exp(lbf.switch)
## [1] 3.055
boundml(elpt,tempvec)
## [1] 1.116 1.118

Note how the model-switch integration gives an estimate much closer to the actual Bayes factor than using the difference between two marginal likelihoods.

I can also use a stepping stone approach.

lbfss <- 0
out <- switchlist[[1]]
diffp <- out[[3]][burnin:niters]-out[[2]][burnin:niters]
for(i in 2:numtemp){
  olddiffp <- diffp
  tempdiff <- tempvec[i]-tempvec[i-1]
  out <- switchlist[[i]]
  diffp <- out[[3]][burnin:niters]-out[[2]][burnin:niters]
  logmaxp <- max(diffp)
  lbfss <- lbfss+tempdiff*logmaxp
  lbfss <- lbfss+log((1/length(diffp))*sum(exp(tempdiff*(olddiffp-logmaxp))))
}
lbfss
## [1] 1.117

In another post, I’ll demonstrate the same principles, but in Julia.