(Based on code by Anna Neufeld at the University of Washington)
This tutorial demonstrates how to apply the Bayesian Melding method to results from a microsimulation model in order to assess the uncertainty around deterministic simulation results.
Download or clone this repository, for example from the command line:
git clone https://github.com/hanase/BMlabsRostock2019 BMlabs
Here, we assume we have a microsimulation model for projecting the number of housing units in US states, starting in 2010 and projecting to 2030. We run the model several times under different input conditions. We have observed data for 2017, which we use to assess the model's uncertainty and propagate it to 2030.
Most of the analysis will be done using base R. Towards the end we will use the ggplot2 package, which can be installed via:
install.packages("ggplot2")
We have a dataset of the number of housing units in 2017 for US states from the Census Bureau (American Community Survey, ACS).
Move into the downloaded directory and load the observed data:
getwd()
setwd("BMlabs")
observed <- read.csv("observedHU.csv")
head(observed)
In our case, we treat the model as a black box and run it five times, i.e. \(I = 5\) (done in advance). Load the simulated data from these 5 runs: one file with results for 2017 (the “present” time) and one with results for 2030.
sim17 <- read.csv("simulatedHU2017.csv")
head(sim17)
sim30 <- read.csv("simulatedHU2030.csv")
head(sim30)
We are using 5 runs and 52 groups:
I <- ncol(sim17)
K <- nrow(sim17)
cat("\nUsing", I, "runs and", K, "groups.")
Apply a square-root transformation to derive \(\mu_{ik}\), \(\Psi_{ik}\), and \(y_{k}\):
mu <- sqrt(sim17)
psi <- sqrt(sim30)
y <- sqrt(observed)[,1] # convert to vector
Estimate overall bias as \(a = \frac{1}{IK} \sum_{i,k}(y_k - \mu_{ik})\):
a <- sum(y - mu)/(K*I)
a
Compute variance for each run as \(\sigma^2_{i} = \frac{1}{K} \sum_{k}(y_k - a - \mu_{ik})^2\):
sigma.sq <- apply((y - a - mu)^2, 2, mean)
sigma.sq
A weight \(w_i\) for run \(i\) is proportional to the product of densities of \(N(a + \mu_{ik}, \sigma_i^2)\) (at \(y_k\)) over \(k\), which can be obtained using the following function:
compute.weight <- function(means, y, var)
    prod(dnorm(x = y, mean = means, sd = sqrt(var)))
To compute the weight for the first run, you would do
compute.weight(a + mu[,1], y, sigma.sq[1])
Derive weights for all runs \(i\) and normalize to sum to 1:
w <- sapply(1:I, function(i) compute.weight(a + mu[,i], y, sigma.sq[i]))
w <- w/sum(w)
w
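Note that for a large number of groups, the product of \(K\) densities can numerically underflow to zero, which would make the normalized weights undefined. A safer variant (a sketch, equivalent to the above whenever the product is computable) sums log-densities and normalizes via the log-sum-exp trick:
log.w <- sapply(1:I, function(i)
    sum(dnorm(y, mean = a + mu[,i], sd = sqrt(sigma.sq[i]), log = TRUE)))
w.safe <- exp(log.w - max(log.w)) # shift by the maximum before exponentiating
w.safe <- w.safe/sum(w.safe)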
The resulting distribution is \[\pi(\mu_k) = \sum_{i=1}^I w_i N(a + \mu_{ik}, \sigma_i^2)\] Plot the distribution for selected states, including the 80% probability interval (dashed) and the observed value (red line). The function plot.mixtures() from the bmaquant.R file (included in the directory) plots a mixture of normal components given by the components’ weights, means and variances:
source("bmaquant.R")
par(mfrow = c(2, 3))
for(k in 1:6) {
    plot.mixtures(w, means = as.numeric(a + mu[k,]), vars = sigma.sq,
                  transform = TRUE, ci = 80, main = rownames(mu)[k],
                  xlab = "housing units", ylab = "density")
    abline(v = observed[k,], col = "red")
}
Set propagation factors for the bias and the variance. We assume that our model started in 2010, so the variance is scaled by the ratio of the two projection horizons, (2030 - 2010)/(2017 - 2010) ≈ 2.86. In order not to deviate greatly from the simulated results, we keep the bias constant:
bv <- (2030 - 2010)/(2017 - 2010)
ba <- 1
Posterior means and variances:
means30 <- ba * a + psi
var30 <- bv * sigma.sq
Plot posterior distribution of housing units in 2030 for selected states:
par(mfrow = c(2, 3))
for(k in 1:6) {
    plot.mixtures(w, means = as.numeric(means30[k,]), vars = var30,
                  transform = TRUE, ci = 80, main = rownames(mu)[k],
                  xlab = "housing units", ylab = "density")
}
Find the median for Alaska. We’ll use the function bmaquant() from bmaquant.R, which returns the given quantile of a mixture of normal components given by weights, means and variances. Since the mixture is defined on the square-root scale, the result is squared to get back to the original scale:
head(bmaquant)
bmaquant(0.5, w, means = as.numeric(means30["Alaska",]), vars = var30)^2
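Similarly, the 10% and 90% quantiles give the bounds of the 80% probability interval plotted above (shown here for Alaska, for illustration):
bmaquant(0.1, w, means = as.numeric(means30["Alaska",]), vars = var30)^2
bmaquant(0.9, w, means = as.numeric(means30["Alaska",]), vars = var30)^2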
Find the medians and the 95% probability intervals for all states:
quants <- NULL
for(k in 1:K) {
    low <- bmaquant(0.025, w, means = as.numeric(means30[k,]), vars = var30)
    high <- bmaquant(0.975, w, means = as.numeric(means30[k,]), vars = var30)
    median <- bmaquant(0.5, w, means = as.numeric(means30[k,]), vars = var30)
    quants <- rbind(quants,
                    data.frame(low = low^2, high = high^2, median = median^2))
}
quants$state <- rownames(sim30)
head(quants)
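The same table can be built more compactly, for example with sapply (an equivalent formulation, shown for illustration):
qfun <- function(p) sapply(1:K, function(k)
    bmaquant(p, w, means = as.numeric(means30[k,]), vars = var30)^2)
quants2 <- data.frame(low = qfun(0.025), high = qfun(0.975),
                      median = qfun(0.5), state = rownames(sim30))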
Plot probability intervals as error bars for selected states, including the raw simulation results:
library(ggplot2)
# subset records to plot
qdata <- subset(quants, low > 1000000 & high < 3000000)
# convert simulated results into long format
# and merge with quantile data
sim30wide <- sim30
sim30wide$state <- rownames(sim30)
head(sim30wide)
sim30long <- reshape(sim30wide, direction = "long",
                     varying = 1:5, v.names = "S")
qdata <- merge(qdata, sim30long)
# plot quantiles as bars and simulated data as dots
g <- ggplot(qdata, aes(x = state)) +
    geom_errorbar(aes(ymin = low, ymax = high), color = "red") +
    geom_point(aes(y = median), color = "red") +
    geom_point(aes(y = S), size = 0.7) +
    xlab("") + ylab("housing units") +
    scale_x_discrete(limits = rev(levels(factor(qdata$state)))) +
    coord_flip()
print(g)
Having a distribution function available, one can extract any probability of interest. For example, derive the probability that Kansas will have fewer than 1,320,000 housing units by 2030. We will use the bmacdf() function, which returns the cumulative distribution function of a mixture of normal components at a given point:
bmacdf
bmacdf(sqrt(1320000), w, as.numeric(means30["Kansas",]), var30)
# compare with simulated values
sim30["Kansas",]
In a situation where the uncertainty is to be assessed, and the weights computed, on the basis of more than one indicator, one can proceed as follows (a code sketch follows the list):
For each indicator \(l\) with \(l = 1,\dots,L\), compute the bias \(a_l\), variance \(\sigma^2_{il}\) and the weight \(w'_{il}\) as above.
For each run \(i\), compute its weight as \(w_i = \prod_l w'_{il}\) and normalize the weights to sum to 1.
The posterior distribution for indicator \(l\) is defined as \[\pi(\Psi_{kl}) = \sum_{i=1}^I w_iN(a_l b_l^a + \Psi_{ikl}, \sigma_{il}^2b_l^v)\]
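As a minimal sketch of this procedure, assume the per-indicator observed values and present-time simulation results are held in lists y.list and mu.list, each with one element per indicator, and that all indicators share the same \(K\) groups (these names are illustrative, not part of the repository):
L <- length(y.list)
a.l <- numeric(L)                # per-indicator bias
sigma.sq.l <- matrix(NA, I, L)   # per-run, per-indicator variance
w.part <- matrix(NA, I, L)       # per-indicator weights w'_il
for(l in 1:L) {
    a.l[l] <- sum(y.list[[l]] - mu.list[[l]])/(K*I)
    sigma.sq.l[,l] <- apply((y.list[[l]] - a.l[l] - mu.list[[l]])^2, 2, mean)
    w.part[,l] <- sapply(1:I, function(i)
        compute.weight(a.l[l] + mu.list[[l]][,i], y.list[[l]], sigma.sq.l[i,l]))
}
# combine across indicators and normalize
w.multi <- apply(w.part, 1, prod)
w.multi <- w.multi/sum(w.multi)
The posterior for each indicator is then formed exactly as before, using indicator-specific propagation factors.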