I think that because psychologists often work with relatively small datasets, which frequently come from experiments, they don’t consider the consequences of correlated predictor variables. If our experiments have balanced designs (i.e., the same number of participants in each cell), then we don’t need to worry about correlated predictors. However, if they’re not balanced, or if we’re using measured variables as predictors, as occurs in non- or quasi-experimental research, then this is potentially a real concern.
Similarly, because our datasets are typically small, it is more difficult to pick out the consequences of working with these correlated variables. Were our datasets larger, the influence of correlated variables would be more obvious.
Recall the code we used to generate three predictors, \(X_{1}\), \(X_{2}\), and \(X_{3}\), and to simulate outcomes, \(y\). Below, I wrap that code in a function so I can call it without having to type or paste in the huge block of code every time I want to use it. Furthermore, I’m giving that function a couple of arguments, n and cor. The argument n is the number of observations to generate when the function is called, and the argument cor is the correlation between X2 and X3. The function returns a data frame with observations for the three predictors and the outcome. For this simulation I’ve changed the parameters so that they’re all the same: \(X_{1} = 1\), \(X_{2} = 1\), and \(X_{3} = 1\). Just like in a psychological experiment, we want to maintain as much consistency as possible within the experiment, with the exception of the manipulation(s). This will help with our comparisons later.
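The original block isn’t reproduced here, but a minimal sketch of what such a function could look like is below. The name `sim_data` and the use of `MASS::mvrnorm()` to induce the correlation are my assumptions; the essential ingredients are the `n` and `cor` arguments and slope parameters all set to 1.

```r
library(MASS)  # for mvrnorm()

# Sketch of a data-generating function: n observations, with the
# correlation between X2 and X3 controlled by the `cor` argument.
# (The function name and internals are assumptions, not the original code.)
sim_data <- function(n, cor) {
  # Covariance matrix: X1 is uncorrelated with X2 and X3;
  # X2 and X3 share the requested correlation (unit variances,
  # so covariance equals correlation).
  sigma <- matrix(c(1,   0,   0,
                    0,   1,   cor,
                    0,   cor, 1),
                  nrow = 3, byrow = TRUE)
  X <- mvrnorm(n = n, mu = c(0, 0, 0), Sigma = sigma)
  X1 <- X[, 1]; X2 <- X[, 2]; X3 <- X[, 3]

  # Outcome: all three slope parameters set to 1, plus random noise
  y <- 1 * X1 + 1 * X2 + 1 * X3 + rnorm(n, mean = 0, sd = 1)

  data.frame(X1 = X1, X2 = X2, X3 = X3, y = y)
}
```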
Let’s take it for a test drive:
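Something along these lines, assuming the `sim_data()` sketch above:

```r
set.seed(1234)
test <- sim_data(n = 100, cor = 0.8)
head(test)

# The correlation between X2 and X3 should be close to the requested 0.8
cor(test$X2, test$X3)
```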
Looks like it works. Now, the reason we’re doing this in the first place is that we want to run a kind of experiment. In psychology, that often means randomly assigning people to different treatment conditions. But here, we’re not studying people; we’re studying a modeling process. Thus, rather than manipulating the condition under which someone behaves, we manipulate the condition under which we model the data. Specifically, we’re going to observe what happens when we model data that either does or does not feature correlated predictors. We can do this by repeatedly calling the function under each of those conditions, fitting a model, and recording the results.
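One way to set up that loop is sketched below. The sample size, the number of simulations, and the correlation of 0.8 for the “correlated” condition are my own placeholder choices, and `broom::tidy()` is just one convenient way to record the coefficients, standard errors, and p-values from each fit.

```r
library(broom)  # tidy() extracts a coefficient table from a fitted model

# One run of the "experiment": generate data under a given correlation,
# fit the model, and return the coefficient table tagged with the condition.
run_once <- function(cor) {
  d   <- sim_data(n = 100, cor = cor)
  fit <- lm(y ~ X1 + X2 + X3, data = d)
  out <- tidy(fit)
  out$cor_condition <- ifelse(cor == 0, "uncorrelated", "correlated")
  out
}

set.seed(5678)
n_sims <- 1000

# Repeat under both conditions and stack the results into one data frame
results <- do.call(
  rbind,
  c(lapply(seq_len(n_sims), function(i) run_once(cor = 0)),
    lapply(seq_len(n_sims), function(i) run_once(cor = 0.8)))
)
```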
This is a success, and we now have some data we can plot and examine. First, is there any difference between the coefficients due to our manipulated correlation between predictor variables?
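As a rough sketch of the kind of plot described here (again assuming the `results` data frame built above), the distributions of the coefficient estimates can be compared across conditions:

```r
library(ggplot2)

# Distribution of the estimated coefficients for each predictor,
# split by whether X2 and X3 were correlated.
results |>
  subset(term != "(Intercept)") |>
  ggplot(aes(x = estimate, fill = cor_condition)) +
  geom_density(alpha = 0.5) +
  facet_wrap(~ term) +
  labs(x = "Coefficient estimate", fill = "Condition")
```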
Well, it seems pretty clear that our manipulation led to changes in the standard errors and, as a consequence, our p-values. This is also reflected in the plot of the coefficients, as you can see that the variability in the estimates is larger for X2 and X3 when they’re correlated. We can test these observations more formally with a series of linear models based on the data. Note that I dropped the intercept term in these models, and made X1 the baseline comparison group.
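The exact specification isn’t shown, but my reading of the description is that the intercept rows are dropped from the simulation output and `term` is relevelled so that X1 serves as the reference group. A sketch under those assumptions:

```r
# Drop the fitted intercepts from the simulation output and make X1
# the reference level, so X2 and X3 are compared against it.
meta <- subset(results, term != "(Intercept)")
meta$term <- relevel(factor(meta$term), ref = "X1")

# Do the estimates, standard errors, and p-values differ by
# predictor and by the correlation condition?
summary(lm(estimate  ~ term * cor_condition, data = meta))
summary(lm(std.error ~ term * cor_condition, data = meta))
summary(lm(p.value   ~ term * cor_condition, data = meta))
```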
I hope this illustrates the importance of considering whether your predictors are correlated. Having correlated predictors results in a dramatic decrease in the precision of their estimates, which in turn influences the power to detect an effect. While this problem is not typically so great in experiments (as conditions feature random assignment), it can sometimes be a problem if the data are dramatically unbalanced, or if you are including a covariate in the model.
Furthermore, these few tutorials can serve as an illustration of the power of simulation as a method of investigation. In my view, there are few better ways to understand your data than to think about the process that generated them and to check whether your assumptions are correct by simulating that process. With the minimal examples I’ve used here, it should be possible to think of other parameters to adjust.