Let’s hypothesize that eating ice cream cured depression, and that this effect was mediated by the sensory pleasure derived from eating the ice cream. We acquire a large grant from Big Ice Cream, bribe the IRB, and are thus able to run the study of our (ice) dreams.
In a large experiment, respondents are assigned to eat either ice cream or a clearly inferior food option (e.g., sauerkraut), both served in a cone to ensure that experimental conditions are comparable. Next, they fill out a highly reliable and valid measure of sensory pleasure, followed some time later by a highly reliable, valid and practically useful measure of depression. (See also this thread by Eiko Fried, but… hey, one can dream!)
Schrödinger’s Ice Cream
When we analyse the data, we observe a peculiar pattern: There is no difference in the level of depression between the ice cream and the sauerkraut group. However, we do find some sort of significant effect: ice cream has an indirect effect on depression, which is mediated by sensory pleasure. We conclude that while ice cream does not cure depression directly, it certainly does so indirectly.
This study is, regrettably, only a thought experiment, and the results are purely fictitious. But the described pattern of results isn’t.
Especially since the advent of PROCESS, we’ve encountered it multiple times while reviewing manuscripts and in the published literature. One could call it the indirect effect ex machina: the data may fail to support your central claim (ice cream reduces depression), but you can still somehow claim that you found the mechanism behind your unsupported claim (ice cream increases sensory pleasure which in turn decreases depression).
This is, of course, somewhat perplexing, and different explanations can be invoked. Wasn’t there something strange going on with the statistical power in mediation analysis? There was! (See, e.g., Kenny and Judd, 2013.) Maybe there are additional pathways from the intervention to the outcome that happen to cancel out the discovered indirect effect? Maybe ice cream leads to sensory pleasure, which decreases depressiveness, but also to dietary guilt, which makes people unhappy? That’s certainly possible, but color me (imaginarily) skeptical. Also, note that in such a scenario of effects that cancel each other out, you couldn’t claim that ice cream makes people less depressed.
A Less (N)ice Alternative Explanation
Here is another explanation that, in many scenarios, may be much more parsimonious and also somewhat trivial: the mediator and the outcome are confounded. Consider the following figure:
Figure 1. You (A) vs. the DAG she told you not to worry about (B)
Scenario A is what we assume when we run our (naïve) mediation analysis: ice cream has an effect on sensory pleasure which has an effect on depression. Multiply those two effects, BAM, and we discover the amazing indirect benefits of cold and delicious creamy treats. And we’ve conducted an experiment so confounding cannot be a problem, right? Right?!
But we have only randomized one part of the mediation: Ice Cream vs. Sauerkraut. (Also a good title for a video game in which you fight evil Germans.)
This does not, however, mean that we automatically get the causal effect of sensory pleasure on depression. Sensory pleasure has not been randomized, and it is more than plausible that it is confounded with the level of depression that people report. For example, some people may just be more prone to reporting that all sorts of things are good and fine (as it happens, this is also a contender for “most parsimonious alternative hypothesis explaining away 90% of positive psychology”), including the sensory pleasure they derive from foods but also how they are doing more generally.
So the indirect effect that we calculate is the product of two coefficients: one that we know is causal (the ice cream vs. sauerkraut intervention → sensory pleasure), and another that could reflect all sorts of confounding. (Although, as one commenter has correctly pointed out, even from the causal path we cannot conclude that ice cream cures depression, because we don’t know what exactly about the treatment works. Is it that sauerkraut makes people depressed? Is it the creamy fat content of the ice cream? Did we accidentally choose exactly those flavors that have some beneficial side effects?) What do we get if we multiply something causal with something spurious? A m̶a̶t̶c̶h̶ ̶m̶a̶d̶e̶ ̶i̶n̶ ̶h̶e̶a̶v̶e̶n̶ spurious indirect effect, leaving the taste of sauerkraut ice cream in your mouth.
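The product of a causal coefficient and a confounded one can be made concrete with a minimal simulation sketch. All variable names and effect sizes below are made-up assumptions for illustration, not estimates from any real study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Randomized treatment: 1 = ice cream, 0 = sauerkraut
ice_cream = rng.integers(0, 2, size=n).astype(float)

# Unobserved confounder: a general tendency to rate everything positively
positivity = rng.normal(size=n)

# Mediator: causally raised by the treatment AND by the confounder
pleasure = 0.5 * ice_cream + positivity + rng.normal(size=n)

# Outcome: depression depends ONLY on the confounder; neither the
# treatment nor the mediator has any causal effect on it
depression = -positivity + rng.normal(size=n)

def slopes(y, *xs):
    """OLS slope coefficients of y on the given predictors (plus intercept)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = slopes(depression, ice_cream)[0]        # ~ 0.0: no total effect
a = slopes(pleasure, ice_cream)[0]              # ~ 0.5: genuinely causal
b = slopes(depression, pleasure, ice_cream)[0]  # ~ -0.5: pure confounding
indirect = a * b                                # ~ -0.25: "ice cream helps!"
```

With these numbers, standard mediation bookkeeping would report a sizable indirect “benefit” of roughly a × b ≈ −0.25, even though, by construction, neither ice cream nor pleasure affects depression at all.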
“Wait a second”, you may think, “Total effect = indirect effect + direct effect, and the total effect is causally identified, so can’t we just calculate total effect minus direct effect to get the indirect effect?” But it turns out that this fails because the direct effect isn’t causally identified either. In standard mediation analysis, the direct effect is the effect of the intervention on the outcome, controlling for the mediator. Take another look at Figure 1, Panel B: There are two arrows pointing into Sensory Pleasure, one starting from Ice Cream, another one starting from Positivity Bias.
This means that Sensory Pleasure is a collider between Ice Cream and Positivity Bias. Controlling for Sensory Pleasure can introduce a spurious association between Ice Cream and Positivity Bias, so your “direct effect” will also be confounded by the Positivity Bias. This may seem counterintuitive (after all, Ice Cream has been randomized, so how could it be confounded?), but strange things like this happen when you control for the outcome of something that you are interested in. In the simplest data situation one can simulate, this would result in a positive effect of ice cream on depression. Maybe that would give you pause and make you reconsider your analyses (ice cream can’t possibly be bad, can it?!), but depending on your sample size, the spurious direct effect may end up non-significant anyway. And more complex data situations may result in patterns that don’t necessarily raise any red flags for the average ice-cream lover.
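The same kind of toy setup (again, all numbers are illustrative assumptions) shows the collider problem directly: once you control for the mediator, the randomized treatment picks up a spurious positive “direct effect” on depression:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

ice_cream = rng.integers(0, 2, size=n).astype(float)  # randomized treatment
positivity = rng.normal(size=n)                       # unobserved confounder

# Mediator is a collider: both Ice Cream and Positivity Bias point into it
pleasure = 0.5 * ice_cream + positivity + rng.normal(size=n)

depression = -positivity + rng.normal(size=n)         # treatment has NO effect

# "Direct effect": regress the outcome on the treatment while
# controlling for the mediator, as standard mediation analysis does
X = np.column_stack([np.ones(n), ice_cream, pleasure])
direct = np.linalg.lstsq(X, depression, rcond=None)[0][1]

# Conditioning on the collider opens the path
# Ice Cream -> Pleasure <- Positivity, so `direct` lands near +0.25:
# ice cream now appears to INCREASE depression, purely as an artifact
```

Here the artifact even has the “wrong” sign; in messier setups it may look perfectly plausible and never raise a red flag.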
Of course, this whole problem does not apply only to scenarios in which there is no total effect. For example, we conducted a systematic review of sauerkraut-mediation studies and found one in which drinking sauerkraut juice led to increased endorsement of right-wing ideology, and this effect was mediated by the belief that one had consumed something healthy. From a causal inference perspective, the total effect is unsuspicious and hopefully motivates politicians to take action and ban such a questionable beverage, which should not exist in the first place. But the indirect effect, which suggests a specific mechanism, might as well be spurious (which should be a relief to all sauerkraut-juice lovers out there).
However, I find the scenario in which there is no total effect more fascinating because it is one amazing research hack. Want to show that X affects Y? Just find a mediator that is affected by X and confounded with Y and you’re good to go! If you are having a hard time finding a suitable variable, just choose something that is really close to X but ensure that your measure of it shares method variance with your measure of Y.
More likely though, psychological researchers report and interpret spurious indirect effects because they aren’t aware of the issue. Having randomized something may result in the reassuring feeling that confounding can’t possibly be an issue anymore, even more so because causal inference training in psychology often seems to be limited to “correlation does not imply causation, go run an experiment instead.” Still, thinking of the vast spread of mediation analysis in the literature makes Smaldino and McElreath’s claim that “[s]ome normative methods of analysis have almost certainly been selected to further publication instead of discovery” sound even more uncanny.
I Scream, You Scream, We All Scream for… Better Mediation Analyses
How can we do better? Bullock, Green and Ha’s “Yes, but What’s the Mechanism? (Don’t Expect an Easy Answer)” offers a more in-depth analysis of the issues with mediation analysis, as well as some recommendations. The title of the paper is a massive spoiler though: mediation analysis done right is freakin’ hard. To properly support a mediational claim, you either need really strong assumptions (best paired with excellent knowledge of the underlying causal web) or the patience to carefully manipulate the mediator experimentally (paired with… again some rather strong assumptions). (And a lot of ice cream to fight the depression caused by staring into the abyss of infinite regress.)
One extreme conclusion could be “go big or go home”: Either invest the time and effort to do it right, or let go of fancy mediation claims and stick with simple main effects (those can be hard enough anyway). I’d be fine with a field-wide fancy-model hiatus, but this is going to be hard to sell. Psychologists are used to “cheap” mediation analysis and may even expect that every paper worthy of publication needs at least one fancy mediation model.
So, at the risk of pandering to the k̶̶r̶̶a̶̶u̶̶t̶ crowd, here’s a more vanilla sorbestion:
First, when doing a mediation analysis on data from an experiment, always report the total effect on the outcome of interest. (So maybe Baron & Kenny were onto something when recommending their guidelines, which later work, cited a gazillion times, criticised as having ended research lines ‘prematurely’.) A simple mean comparison between experimental groups may seem boring; it doesn’t provide an answer to the fancy questions one would like to ask, but it provides the correct answer to the fundamental question of whether or not the manipulation affects the outcome. (As we Germans like to say: rather a main effect in the hand than an indirect effect on the roof.)
Second, discuss the assumptions underlying the interpretation of the mediation paths — in particular the assumption that there are no unobserved confounders between the mediator and the outcome. When spelled out explicitly, those assumptions may seem rather ridiculous, so maybe remind readers that all mediation analyses following the standard protocol require these assumptions. Getting people to pause and think a bit more about a common analysis approach may be a modest first step towards better practices.
Third, be wary of mediation in the absence of a total effect. There may be scenarios in which it makes sense, but confounding may be the more plausible alternative explanation in others. (For an example of this discussion being had explicitly and in the open, see the recent back and forth between Gangestad et al. and Stern et al. (including a 100% CI member) over the merit of testing salivary steroids as mediators of a non-significant effect of fertile phase on preferences for masculinity.)
Fourth, go get some ice cream! If it has any negative indirect effects, finding out about them will be next to impossible anyway. And the direct effects are worth it.