Unmeasured or uncontrolled confounding is a common problem in
observational studies. This is a challenge to observational research
even in the analysis of total effects.

When we are interested in pathways and direct and indirect effects, the
assumptions about confounding that are needed to identify these effects
are even stronger than for total effects.

We might be worried that these assumptions are violated and that our
estimates are biased.

**Sensitivity analysis techniques can help assess HOW ROBUST results are
to violations in the assumptions being made.**

These techniques assess the extent to which an unmeasured variable (or
variables) would have to affect both the exposure and the outcome in
order for the observed associations between the two to be attributable
solely to confounding rather than a causal effect of the exposure on the
outcome.

Sensitivity analysis can also be useful for assessing a plausible range
of values for the causal effect of the exposure on the outcome,
corresponding to a plausible range of assumptions concerning the
relationship between the unmeasured confounder and the exposure and
outcome.

## Sensitivity analysis for unmeasured confounding for total effects

Consider the following figure in which *U* represents an unmeasured
confounder, *C* measured covariables, *A* the exposure and *Y* the
outcome.

```{r, echo = FALSE}
# Creating the causal diagram for a mediation model.
# A minimal sketch of the diagram described in the text, assuming the
# dagitty package: U and C each affect both A and Y, and A affects Y.
library(dagitty)

dag <- dagitty("dag {
  U -> A
  U -> Y
  C -> A
  C -> Y
  A -> Y
}")

coordinates(dag) <- list(
  x = c(A = 0, U = 1, C = 1, Y = 2),
  y = c(A = 0, U = 1, C = -1, Y = 0)
)

plot(dag)
```

### Continuous outcomes

Suppose then we have obtained an estimate of the effect of the exposure
*A* on the outcome *Y* conditional on measured covariables *C* using
regression analysis.

We will define the bias factor **B_add(*c*)** on the additive scale as
the difference between the expected differences in outcomes comparing
*A* = *a* and *A* = *a*^\*^ conditional on covariables *C* = *c* and
what we would have obtained had we been able to adjust for *U* as well.
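
In symbols, one way to write this verbal definition is

$$B_{add}(c) = \{E(Y|a,c) - E(Y|a^*,c)\} - \sum_{u}\{E(Y|a,c,u) - E(Y|a^*,c,u)\}P(u|c)$$

where the first term is the exposure-outcome contrast adjusted only for
*C* and the second term is the contrast we would have obtained after
additionally adjusting for *U*.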

If the exposure is binary, then we simply have *a* = 1 and *a*^\*^ = 0.

A simple approach to sensitivity analysis is possible if we assume that
**(A8.1.1)** *U* is binary and **(A8.1.2)** that the effect of *U* (on
the additive scale) is the same for those with exposure level *A* = *a*
and exposure level *A* = *a*^\*^ (no *U* × *A* interaction).

If these assumptions hold, let γ be the effect of *U* on *Y* conditional
on *A* and *C*, that is:

$γ = E(Y|a,c,U = 1) - E(Y|a,c,U = 0)$

Note that by assumption **(A8.1.2)**,

$γ = E(Y|a,c,U = 1) - E(Y|a,c,U = 0)$

is the same for both levels of the exposure of interest.

Note also that *γ* is the effect of *U* on *Y* already having adjusted
for *C*; that is, in some sense the effect of *U* on *Y* not through *C*.

Now let *δ* denote the difference in the prevalence of the unmeasured
confounder *U* for those with *A* = *a* versus those with *A* = *a*^\*^,
that is:

$δ = P(U = 1|a,c) - P(U = 1|a^*,c)$

Under assumptions **(A8.1.1)** and **(A8.1.2)**, the bias factor is
simply given by the product of these two sensitivity analysis
parameters:

$B_{add}(c) = γδ$

Thus to calculate the bias factor we only need to specify the effect of
*U* on *Y* and the prevalence difference of *U* between the two exposure
groups and then take the product of these two parameters.

Once we have calculated the bias term **B_add(c)**, we can simply
estimate our causal effect conditional on *C* and then subtract the bias
factor to get the "corrected estimate"--- that is, what we would have
obtained if we had controlled for *C* and *U*.

Under these simplifying assumptions **(A8.1.1)** and **(A8.1.2)**, we
can also get adjusted confidence intervals by simply subtracting *γδ*
from both limits of the estimated confidence intervals.
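
As a small illustration (the estimate, confidence limits, and
sensitivity analysis parameters below are hypothetical, chosen only to
show the arithmetic): if *γ* = 2.0 and *δ* = 0.3, the bias factor is
*γδ* = 0.6, and the correction is a single subtraction.

```{r}
# Hypothetical effect estimate of A on Y, adjusted for C only,
# with its 95% confidence interval (illustrative numbers only)
estimate <- 1.50
ci <- c(0.80, 2.20)

# Assumed sensitivity analysis parameters:
# gamma: effect of U on Y conditional on A and C
# delta: difference in the prevalence of U between the exposure groups
gamma <- 2.0
delta <- 0.3

# Bias factor on the additive scale under (A8.1.1) and (A8.1.2)
bias <- gamma * delta

# Corrected estimate and confidence interval: subtract the bias factor
estimate - bias
ci - bias
```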

We may not believe any particular specification of the parameters *γ*
and *δ*, but we could vary these parameters (based on expert knowledge
or previously reported estimates of the associations between *C* and
*Y*) over a range of plausible values to obtain a plausible range of
corrected estimates.

Using this technique, we could also examine how substantial the
confounding would have to be to explain away an effect (we could do this
for the estimate and confidence interval).
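
A sketch of how this could be tabulated in R, varying *γ* and *δ* over
a grid of plausible values (the grid and the illustrative estimate are
assumed, not taken from the text) and flagging the combinations that
would explain away the point estimate or its lower confidence limit:

```{r}
# Illustrative estimate (adjusted for C only) and lower 95% confidence limit
estimate <- 1.50
ci_lower <- 0.80

# Assumed plausible ranges for the sensitivity analysis parameters
gamma_values <- seq(0, 3, by = 0.5)    # effect of U on Y
delta_values <- seq(0, 0.5, by = 0.1)  # prevalence difference of U

grid <- expand.grid(gamma = gamma_values, delta = delta_values)
grid$bias <- grid$gamma * grid$delta
grid$corrected <- estimate - grid$bias

# Combinations of (gamma, delta) strong enough to explain away
# the point estimate, or to move the lower confidence limit below zero
grid$explains_away_estimate <- grid$corrected <= 0
grid$explains_away_ci <- (ci_lower - grid$bias) <= 0

head(grid)
```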

### Continuous Outcome with Different Sensitivity Analysis Parameters for Different Covariate Values

Suppose now that instead of focusing on effects conditional on a
particular covariate value *C* = *c* or specifying the sensitivity
analysis parameters *γ* and *δ* to be the same for each covariable *C*,
we were interested in the overall marginal effect averaged over the
covariables and we wanted to specify different sensitivity analysis
parameters for different covariable levels.

Suppose then for each level of the covariates of interest *C* = *c* we
specified a value for the effect of *U* on *Y*

$γ(c) = E(Y|a,c,U = 1) - E(Y|a,c,U = 0)$

and also a value for the prevalence difference of *U* between those with
exposure status *A* = *a* and *A* = *a*^\*^ and covariables *C* = *c*

$δ(c) = P(U = 1|a,c) - P(U = 1|a^*,c)$

We could then obtain an overall bias factor, **B_add**, by taking the
product of the bias factors in each stratum of *C* and then averaging
these over *C*, weighting each stratum of *C* according to the
proportion of the sample in that stratum. The overall bias factor is
then

$B_{add} = \sum_{c}\{γ(c)δ(c)\}P(C = c)$

We could then subtract this overall bias factor from our estimate
adjusted only for *C* to obtain a corrected estimate.
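
A sketch of this calculation with covariable-specific parameters (the
strata, their proportions, and the parameter values below are
hypothetical):

```{r}
# Hypothetical strata of C, their sample proportions P(C = c),
# and stratum-specific sensitivity analysis parameters
strata <- data.frame(
  c_level = c("c1", "c2", "c3"),
  p_c     = c(0.5, 0.3, 0.2),   # P(C = c); must sum to 1
  gamma_c = c(1.5, 2.0, 2.5),   # effect of U on Y within each stratum
  delta_c = c(0.2, 0.3, 0.1)    # prevalence difference of U within each stratum
)

# Overall bias factor: stratum-specific products, averaged over C
overall_bias <- with(strata, sum(gamma_c * delta_c * p_c))

# Corrected estimate: subtract the overall bias factor from the
# (hypothetical) estimate adjusted only for C
estimate <- 1.50
estimate - overall_bias
```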

In this case, however, we can no longer simply subtract the bias factor
from both limits of the confidence interval because this does not take
into account the variability in our estimates of the proportion of the
sample in each stratum of the covariates, $P(C = c)$.

Corrected confidence intervals could instead be obtained by
bootstrapping.
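
In outline, such a bootstrap might look as follows; the data frame
`dat` (with columns `y`, `a`, `c`), the outcome model, and the named
vectors `gamma_c` and `delta_c` of stratum-specific parameters are all
hypothetical placeholders.

```{r, eval = FALSE}
# Sketch only: 'dat', the model formula, and the named vectors
# 'gamma_c' and 'delta_c' are hypothetical placeholders
set.seed(2024)
n_boot <- 1000
corrected <- numeric(n_boot)

for (b in seq_len(n_boot)) {
  d <- dat[sample(nrow(dat), replace = TRUE), ]

  # Re-estimate the effect of A on Y adjusted for C in the resampled data
  est_b <- coef(lm(y ~ a + c, data = d))["a"]

  # Re-estimate the stratum proportions P(C = c) in the resampled data
  p_c <- prop.table(table(d$c))

  # Overall bias factor with covariable-specific parameters,
  # then the corrected estimate for this bootstrap sample
  bias_b <- sum(gamma_c[names(p_c)] * delta_c[names(p_c)] * p_c)
  corrected[b] <- est_b - bias_b
}

# Percentile bootstrap confidence interval for the corrected effect
quantile(corrected, c(0.025, 0.975))
```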

```{=html}
<!-- - binary outcome
Coding part - Omar
#check the mediation package and the CMAverse package for sensitivity analysis
-->
```
