Is it good science to manipulate the experiment until you find an effect?

“There’s a different energy fight brewing in Canada over hydroelectric power. Canada has $50 billion in new hydro projects proposed or under construction, and plans to ship a lot of that electricity to the U.S. Unlike the oil sands, hydro is renewable, but it’s not necessarily green. Roberta Benefiel is sitting on a rock overlooking the Churchill River in central Labrador, a rural scenic region of Northeast Canada. She has a picture of her parents sitting on this same rock and now she’s looking out over the same waterfall that’s in the photo. She’s wistful. These rapids will disappear if a planned dam on this site gets built.” (Junkscience.com)

This vivid account of the Canadian government’s efforts to be more “green” is troubling. There is a distorted logic to its aims: as with many bad outcomes, it began with a good intention, in this case to use fewer fossil fuels. Hydro-electric power is a renewable energy source and Canada has plenty of lakes, except that instead of building around the natural features, power tools and diggers do the work, burning fossil fuels indirectly. The renewable energy that results then has to be financially viable for the energy company, so it is sold to the US, and the only benefit Canada receives is financial. Somewhere along the line the sound logic and good intentions disappeared.

How, then, does this apply to Psychology and its applied research? Take sampling as an example: research aims to give the clearest answer with the greatest amount of external validity. The optimum would be to test the entire population; unfortunately, if you have done research you know this to be impossible. Simply put, you have to scale back to a smaller slice of the pie and generalize. That smaller slice is a sample, and generally the larger the sample, the more viable the results are for generalizing. In effect, larger samples lead to increased precision when estimating unknown parameters (Wikipedia). So, if you do not find an effect in your experiment, should you simply repeat it with a larger sample size?
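To make the precision point concrete, here is a minimal simulation sketch; the population mean of 100, standard deviation of 15, and the sample sizes are purely illustrative assumptions, not values from any real study. It draws many samples at each size and shows the spread of the sample means shrinking roughly in proportion to 1/√n.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: mean 100, standard deviation 15 (illustrative values only).
pop_mean, pop_sd = 100, 15

for n in (25, 100, 400, 1600):
    # Draw 5,000 samples of size n and record each sample's mean.
    sample_means = rng.normal(pop_mean, pop_sd, size=(5000, n)).mean(axis=1)
    # The spread of those sample means is the standard error, roughly sd / sqrt(n).
    print(f"n={n:5d}  observed SE={sample_means.std(ddof=1):.3f}  "
          f"theoretical SE={pop_sd / np.sqrt(n):.3f}")
```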

 

Not necessarily, even though the law of large numbers and the central limit theorem support this and should, in theory, lead to increased precision in your results (Chi, Hung & Wang, 2004). The reason is something called dependence, which you know as correlation. Sample size is an element of a correlational study, but it is not as integral as it is to, say, a clinical trial (O’Neill, 2006; Ware et al., 2009). If there is a genuine effect in a correlational study, it should hold regardless of sample size once the sample is acceptably large: an effect established at n = 100 or n = 150 should still hold at p = .05 with n = 1000. There are, of course, cases where increasing the sample size is very helpful, such as studies with multiple dependent or independent variables, and there are alternatives such as meta-analyses (Mulrow, 1994).
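As a rough illustration of why a genuine correlation should survive a larger sample, here is a small sketch (assuming a two-tailed Pearson test; the sample sizes are arbitrary examples, not from any study) that computes the smallest |r| reaching p < .05 at each n. A real correlation of, say, .25 that clears the bar at n = 100 clears the much lower bar at n = 1000 even more comfortably.

```python
from math import sqrt
from scipy import stats

def critical_r(n, alpha=0.05):
    """Smallest |r| reaching two-tailed significance at `alpha` for a Pearson correlation of n pairs."""
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return t_crit / sqrt(t_crit ** 2 + n - 2)

for n in (50, 100, 150, 1000):
    print(f"n={n:5d}  |r| needed for p < .05: {critical_r(n):.3f}")
```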

The main point of this blog is not to make you double-check your sampling decisions but to get you to think more broadly. Take meta-analyses, for example: their goal is to find the true effect across a variety of papers, and they do this not by replicating each study with a larger sample but by finding common themes across the majority in methodology, aims, and so on. Whilst this can inflate the error of the statistical analysis, it preserves a qualitative understanding that would only be muddied by re-running each study with a bigger sample. Effectively, as with the opening example, a flawed stand-alone experiment with a clear aim and methodology is better interpreted rigorously as it stands than manipulated until it beats a confidence level.
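To show, in numbers, what pooling a common effect across papers can look like, here is a minimal fixed-effect, inverse-variance sketch; the three (effect size, standard error) pairs are made-up illustrative values, not results from real studies.

```python
import math

# Hypothetical (effect size, standard error) pairs from three studies -- illustrative only.
studies = [(0.30, 0.12), (0.22, 0.09), (0.35, 0.15)]

# Fixed-effect inverse-variance weighting: each study counts in proportion to 1 / SE^2.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}, "
      f"95% CI = [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```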

Remember, statistically significant does not mean empirically significant!


4 thoughts on “Is it good science to manipulate the experiment until you find an effect?”

  1. I really like how you’ve used a real-world example to show what you are trying to present in the blog before you actually explain it. Referring to your last paragraph regarding meta-analyses, I think they can often be very biased, even though they are meant to present an overall picture of results. The studies they use will vary greatly in their methodologies and outcomes, so it is difficult to compare them directly. Also, a meta-analysis only bases its findings on published results. This is a huge bias on its own, because the majority of published results are positive (i.e. have significant findings), whereas in reality around 60% of experiments conducted do not find a significant result (The All Results Journal; Morgan, 2010). This means that even if the papers selected were all investigating the same thing using very similar methodologies, there would already be a huge bias within the findings, which would not accurately display the true results of all of the investigations. This is just another example of how publications can be manipulated to show significant effects even if they are not empirically true.

    Morgan: http://www.timeshighereducation.co.uk/story.asp?storycode=411323
    All Results Journal: http://www.arjournals.com/ojs/

  2. Pingback: Comments for Naomi for Blog 3 « psucd8

  3. Thanks for the kind words and all that jazz. I do agree that meta-analyses are, like all statistics, a dangerous tool that can be misused. The researcher’s personal spin will always influence what research is chosen and how it is interpreted, so the overall message differs from person to person.
    An example of this is meta-analyses of Abraham Lincoln and his attitude to religion: everybody differs on whether he was religious or not, with the majority using the same sources.

    Yet Derderian et al. (1997) compared large randomized trials with their corresponding meta-analyses and found that only 5 of 40 differences were statistically significant. This suggests that the majority of meta-analyses are well researched and well written, and that good scientific rigour determines much of the viability of meta-analysis as a research methodology.

  4. Pingback: Homework for TA (14/3/2012) 23:14 « psud22psych
