There’s a different energy fight brewing in Canada over hydroelectric power. Canada has $50 billion in new hydro projects proposed or under construction, and plans to ship a lot of that electricity to the U.S. Unlike the oil sands, hydro is renewable, but it’s not necessarily green. “Roberta Benefiel is sitting on a rock overlooking the Churchill River in central Labrador, a rural scenic region of Northeast Canada. She has a picture of her parents sitting on this same rock and now she’s looking out over the same waterfall that’s in the photo. She’s wistful. These rapids will disappear if a planned dam on this site gets built.” (Junkscience.com)
This vivid account of the Canadian government’s efforts to be more “green” is troubling. There is a distorted logic to its aims: as with many bad things, there was a good aim at the beginning, namely to burn fewer fossil fuels. Hydroelectric power is a renewable energy source, and Canada has plenty of lakes, yet instead of building around the natural features, power tools and diggers do the work, consuming fossil fuels indirectly. The renewable energy that results then has to be financially viable for the energy company, and so it is sold to the US for a price; the only benefit Canada receives is financial. Somewhere along the line the sound logic and good intentions disappeared.
How, then, does this apply to psychology and its applied research? Take sampling as an example: research aims to give the clearest answer with the greatest external validity. The optimum would be to test the entire population, but if you have done research you know this is impossible. You simply have to scale back to a smaller slice of the pie and generalize from it; this is a sample, and generally the larger the sample, the more confidently the results can be generalized. In effect, larger samples lead to increased precision when estimating unknown parameters (Wikipedia). So if you do not find an effect in your experiment, should you simply repeat it with an increased sample size?
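To make the precision point concrete, here is a minimal sketch in Python. The population and its parameters are hypothetical, invented purely for illustration; the point is only that the standard error of a sample mean shrinks as the sample grows:

```python
import random
import statistics

random.seed(42)
# Hypothetical population of IQ-like scores (mean 100, SD 15), for illustration only.
population = [random.gauss(100, 15) for _ in range(100_000)]

# Standard error of the mean is roughly SD / sqrt(n), so it shrinks as n grows.
for n in (25, 100, 400, 1600):
    sample = random.sample(population, n)
    se = statistics.stdev(sample) / n ** 0.5
    print(f"n={n:5d}  mean={statistics.mean(sample):6.2f}  SE={se:.2f}")
```

Quadrupling the sample only halves the standard error, which is why chasing precision through sheer sample size quickly hits diminishing returns.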
Not necessarily, even though the law of large numbers and the central limit theorem support this and should, in theory, lead to increased efficacy in your results (Chi, Hung & Wang, 2004). The reason is something called dependence, which you know as correlation. Sample size is an element of a correlational study, but it is not as integral as it is to, say, a clinical trial (O’Neill, 2006; Ware et al., 2009). If there is a genuine effect in a correlational study, it should hold regardless of sample size once the sample is acceptably large: if the effect is demonstrated at n = 100 or n = 150, it should still hold at p = .05 at n = 1000. There are of course cases where a larger sample is very helpful, such as studies with multiple dependent or independent variables, and alternatives such as meta-analyses (Mulrow, 1994).
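The correlation point can be sketched numerically. The standard test of a Pearson correlation uses t = r·√((n − 2)/(1 − r²)) against a two-tailed .05 cutoff of roughly 1.96 for large samples; the values of r below are hypothetical, chosen only to illustrate the contrast:

```python
import math

def t_for_r(r, n):
    """t statistic for testing a Pearson correlation r at sample size n."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# A negligible correlation (r = .05) eventually crosses the ~1.96 cutoff
# once n is pushed high enough, even though the effect is practically trivial.
for n in (100, 1000, 10_000):
    print(f"r=.05, n={n:6d}  t={t_for_r(.05, n):5.2f}")

# A moderate effect (r = .30) is already well past the cutoff at n = 100.
print(f"r=.30, n=100     t={t_for_r(.30, 100):5.2f}")
```

A real moderate effect clears significance at a modest n and stays there, while a tiny effect only becomes “significant” by brute-force sampling, which is exactly the manipulation this blog warns against.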
The main point of this blog is not to make you double-check your sampling decisions but to get you to think more broadly. Take meta-analyses, for example: their goal is to find the true effect across a variety of papers, and they do this not by replicating each study with a larger sample but by finding common themes in methodology, aims, and so on across the majority. Rerunning each study with an inflated sample would drive the statistical error up while muddying the qualitative understanding those common themes provide. Effectively, as in the opening story, a flawed stand-alone experiment with a clear aim and methodology is better interpreted rigorously than manipulated until it beats a confidence level.
Remember: statistically significant does not mean empirically significant!