Create a strategy class object for each modelling approach.

Usage

strategy_maic(formula = NULL, R = 1000)

strategy_stc(formula = NULL)

strategy_gcomp_ml(formula = NULL, R = 1000)

strategy_gcomp_stan(formula = NULL)

strategy_mim(formula = NULL)

new_strategy(strategy, ...)

Arguments

formula

Linear regression formula object

R

The number of resamples used for the non-parametric bootstrap

strategy

Class name from strategy_maic, strategy_stc, strategy_gcomp_ml, strategy_gcomp_stan, strategy_mim

...

Additional arguments

ald

Aggregate-level data

Value

maic class object

stc class object

gcomp_ml class object

gcomp_stan class object

mim class object

Matching-adjusted indirect comparison (MAIC)

MAIC is a non-parametric likelihood reweighting method which allows the propensity score logistic regression model to be estimated without IPD in the AC population. The mean outcomes \(\mu_{t(AC)}\) on treatment \(t = A,B\) in the AC target population are estimated by taking a weighted average of the outcomes \(Y\) of the \(N\) individuals in arm \(t\) of the AB population.

It is used to compare marginal treatment effects where there are cross-trial differences in effect modifiers and limited patient-level data.

$$ \hat{\mu}_{t(AC)} = \frac{\sum_{i=1}^{N} Y_{it(AB)} w_{it}}{\sum_{i=1}^{N} w_{it}} $$ where the weight \(w_{it}\) assigned to the \(i\)-th individual receiving treatment \(t\) is equal to the odds of being enrolled in the AC trial vs the AB trial.
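As a minimal sketch (with simulated data and hypothetical variable names, not the package's internal implementation), the weights can be estimated by the method of moments: centre the IPD covariates at the ALD means, then find the coefficients that balance the weighted covariate means to the ALD means.

```r
# Sketch of MAIC weight estimation, assuming hypothetical IPD covariates
# `x_ipd` and aggregate-level target means `x_ald_mean`.
set.seed(42)
x_ipd <- cbind(age = rnorm(200, 55, 8), sex = rbinom(200, 1, 0.4))
x_ald_mean <- c(age = 60, sex = 0.5)

# Centre the IPD covariates at the ALD means
x_cen <- sweep(x_ipd, 2, x_ald_mean)

# Minimising Q(a) = sum(exp(X a)) solves the moment conditions:
# its gradient is zero exactly when the weighted means match the ALD means
Q <- function(a) sum(exp(x_cen %*% a))
grad_Q <- function(a) colSums(c(exp(x_cen %*% a)) * x_cen)
a_hat <- optim(rep(0, ncol(x_cen)), Q, grad_Q, method = "BFGS")$par

# Weight = estimated odds of enrolment in the AC trial vs the AB trial
w <- c(exp(x_cen %*% a_hat))

# The weighted IPD covariate means now match the ALD means
colSums(w * x_ipd) / sum(w)
```

The same weights then feed the weighted average of outcomes shown above; the number of bootstrap resamples (the `R` argument) would control the uncertainty estimation.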

Simulated treatment comparison (STC)

An outcome regression-based method which targets a conditional treatment effect. STC is a modification of the covariate adjustment method. An outcome model is fitted using the IPD in the AB trial:

$$ g(\mu_{t(AB)}(X)) = \beta_0 + \beta_1^T X + (\beta_B + \beta_2^T X^{EM}) I(t=B) $$ where \(\beta_0\) is an intercept term, \(\beta_1\) is a vector of coefficients for prognostic variables, \(\beta_B\) is the relative effect of treatment B compared to A at \(X=0\), \(\beta_2\) is a vector of coefficients for the effect modifiers \(X^{EM}\) (a subvector of the full covariate vector \(X\)), and \(\mu_{t(AB)}(X)\) is the expected outcome of an individual assigned treatment \(t\) with covariate values \(X\), which is transformed onto a chosen linear predictor scale with link function \(g(\cdot)\).
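A minimal sketch of this model fit, assuming simulated AB-trial IPD with one covariate `age` (both prognostic and effect-modifying) and a hypothetical ALD mean `age_ald`: centring the effect modifier at the ALD mean in the interaction term makes the treatment coefficient the conditional effect at the target covariate values.

```r
# Simulated AB-trial IPD (hypothetical names and effect sizes)
set.seed(42)
n <- 500
age <- rnorm(n, 55, 8)
trt <- rbinom(n, 1, 0.5)  # I(t = B)
y <- rbinom(n, 1, plogis(-1 + 0.05 * age + (0.8 - 0.03 * age) * trt))
dat <- data.frame(y, age, trt)

age_ald <- 60  # mean age in the AC target population (hypothetical ALD)

# Prognostic term on the raw scale; interaction with the effect modifier
# centred at the ALD mean, so coef(fit)["trt"] is the conditional
# log-odds ratio of B vs A at the target covariate values
fit <- glm(y ~ age + trt + trt:I(age - age_ald),
           family = binomial, data = dat)
coef(fit)["trt"]
```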

G-computation maximum likelihood

G-computation marginalizes the conditional estimates by separating the regression modelling from the estimation of the marginal treatment effect for A versus C. First, a regression model of the observed outcome \(y\) on the covariates \(x\) and treatment \(z\) is fitted to the AC IPD:

$$ g(\mu_n) = \beta_0 + \boldsymbol{x}_n \boldsymbol{\beta_1} + (\beta_z + \boldsymbol{x_n^{EM}} \boldsymbol{\beta_2}) \mbox{I}(z_n=1) $$ In the context of G-computation, this regression model is often called the “Q-model.” Having fitted the Q-model, the regression coefficients are treated as nuisance parameters. The parameters are applied to the simulated covariates \(x^*\) to predict hypothetical outcomes for each subject under both possible treatments. Namely, a pair of predicted outcomes, also called potential outcomes, under A and under C, is generated for each subject.

By plugging treatment C into the regression fit for every simulated observation, we predict the marginal outcome mean in the hypothetical scenario in which all units are under treatment C:

$$ \hat{\mu}_0 = \int_{x^*} g^{-1} (\hat{\beta}_0 + x^* \hat{\beta}_1 ) p(x^*) dx^* $$ To estimate the marginal or population-average treatment effect for A versus C on the linear predictor scale, the average predictions, taken over all subjects on the natural outcome scale, are back-transformed to the linear predictor scale, and the difference between the average linear predictions is calculated:

$$ \hat{\Delta}^{(2)}_{10} = g(\hat{\mu}_1) - g(\hat{\mu}_0) $$
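The steps above can be sketched end to end with simulated data (hypothetical names and effect sizes; a logit link is assumed for illustration): fit the Q-model to the IPD, draw covariates \(x^*\) from the target population, predict under each treatment, average on the natural scale, and back-transform.

```r
# Step 1: fit the Q-model to the (simulated) IPD
set.seed(42)
n <- 500
dat <- data.frame(age = rnorm(n, 55, 8), z = rbinom(n, 1, 0.5))
dat$y <- rbinom(n, 1,
                plogis(-1 + 0.04 * dat$age + (0.6 - 0.02 * dat$age) * dat$z))
fit <- glm(y ~ age * z, family = binomial, data = dat)

# Step 2: simulate covariates x* from the target (ALD) population
x_star <- rnorm(10000, 60, 7)

# Step 3: predicted potential outcomes under each treatment,
# averaged on the natural outcome scale
mu1 <- mean(predict(fit, data.frame(age = x_star, z = 1), type = "response"))
mu0 <- mean(predict(fit, data.frame(age = x_star, z = 0), type = "response"))

# Step 4: back-transform (g = logit here) and take the difference
delta <- qlogis(mu1) - qlogis(mu0)  # marginal log-odds ratio
```

In the maximum-likelihood version, this whole pipeline would be repeated over bootstrap resamples (the `R` argument) to quantify uncertainty.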

G-computation Bayesian

The difference between Bayesian G-computation and its maximum-likelihood counterpart is in the estimated distribution of the predicted outcomes. The Bayesian approach also marginalizes, integrates or standardizes over the joint posterior distribution of the conditional nuisance parameters of the outcome regression, as well as the joint covariate distribution.

Draw a vector of size \(N^*\) of predicted outcomes \(y^*_{z^*}\) under each set intervention \(z^* \in \{0, 1\}\) from its posterior predictive distribution under the specific treatment. This is defined as \(p(y^*_{z^*} \mid \mathcal{D}_{AC}) = \int_{\beta} p(y^*_{z^*} \mid \beta) p(\beta \mid \mathcal{D}_{AC}) d\beta\) where \(p(\beta \mid \mathcal{D}_{AC})\) is the posterior distribution of the outcome regression coefficients \(\beta\), which encode the predictor-outcome relationships observed in the AC trial IPD.

This is given by:

$$ p(y^*_{z^*} \mid \mathcal{D}_{AC}) = \int_{x^*} p(y^* \mid z^*, x^*, \mathcal{D}_{AC}) p(x^* \mid \mathcal{D}_{AC}) dx^* $$

$$ = \int_{x^*} \int_{\beta} p(y^* \mid z^*, x^*, \beta) p(x^* \mid \beta) p(\beta \mid \mathcal{D}_{AC}) d\beta dx^* $$ In practice, the integrals above can be approximated numerically, using full Bayesian estimation via Markov chain Monte Carlo (MCMC) sampling.
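The structure of this computation can be sketched without a full MCMC run by substituting a normal approximation to the posterior of the outcome-model coefficients (a deliberate simplification; the Stan-based strategy would instead draw \(\beta\) from the exact posterior). All data and names here are hypothetical.

```r
library(MASS)  # for mvrnorm; shipped with base R distributions

# Simulated AC-trial IPD and a fitted outcome model
set.seed(42)
n <- 500
age <- rnorm(n, 55, 8)
z <- rbinom(n, 1, 0.5)
y <- rbinom(n, 1, plogis(-1 + 0.04 * age + 0.5 * z))
fit <- glm(y ~ age + z, family = binomial)

# Approximate posterior draws: beta ~ MVN(beta_hat, vcov(beta_hat))
B <- mvrnorm(2000, coef(fit), vcov(fit))

# Covariates x* drawn from the target population, under each intervention
x_star <- rnorm(1000, 60, 7)
X1 <- cbind(1, x_star, 1)  # design rows with z* = 1
X0 <- cbind(1, x_star, 0)  # design rows with z* = 0

# For each posterior draw, average the predictions over x* on the natural
# scale, then back-transform; this marginalizes over both beta and x*
mu1 <- plogis(X1 %*% t(B))
mu0 <- plogis(X0 %*% t(B))
delta_draws <- qlogis(colMeans(mu1)) - qlogis(colMeans(mu0))

quantile(delta_draws, c(0.025, 0.5, 0.975))  # posterior summary of the effect
```

The vector `delta_draws` approximates the posterior of the marginal treatment effect; with MCMC, each draw of \(\beta\) would come from the sampler instead of `mvrnorm`.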

Multiple imputation marginalization (MIM)