

In the last decade or so there has been heightened controversy over significance testing, despite its popularity, especially in psychology (see ‘The Controversy over Null Hypothesis Significance Testing Revisited’ for more information). There are in fact alternatives to significance testing, and I will be discussing two of them: effect sizes and confidence intervals.

An effect size is the measured magnitude of the effect the independent variable has on the dependent variable. It is this ‘magnitude of the effect’ that makes effect sizes a good alternative to significance testing. Significance tests do not tell you the size of an effect, yet such information is important for really understanding data and making inferences. Unlike p-values, effect sizes are not inflated by increasing sample size, so they are considered more reliable. Effect sizes are also helpful when comparing the results of different studies. However, it should be noted that effect sizes depend on the size of the manipulation used in an experiment, so if an effect size is reported, the magnitude of the experimental manipulation must also be included. Also, effect sizes only tell us the size of the effect; they give no information about the probability of obtaining the result in the population, which is what significance tests do.
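To make this concrete, here is a minimal sketch of computing Cohen’s d, one common standardised effect size (the difference between two group means divided by their pooled standard deviation). The two groups of scores are made-up numbers, purely for illustration.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardised mean difference between two groups."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical treatment vs. control scores
treatment = [12, 14, 15, 11, 13, 16]
control = [10, 9, 11, 12, 10, 8]
print(round(cohens_d(treatment, control), 2))  # prints 2.11, a large effect
```

Notice that d expresses the group difference in standard-deviation units, which is what lets you compare effects across studies that used different measurement scales.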

Then there are confidence intervals, which represent ‘a range of scores constructed such that the population mean falls within this range’ (Andy Field). For example, a 95% confidence interval means that if we constructed such intervals many times, 95% of them would contain the true population mean. Confidence intervals are good in that they provide the same information as a significance test and more: they show the range within which the population parameter is likely to lie, and the narrower the interval, the more precisely the parameter has been estimated. Although a great alternative to significance testing, this method comes with limitations. Many software packages do not report confidence intervals on their own, and there is the publishing dilemma: significance tests must be used, and must yield a significant result, for papers to get published (the file drawer problem). Also, because confidence intervals are often very broad, it can be hard to estimate parameters accurately.
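As an illustration, here is a minimal sketch of constructing a confidence interval for a mean. The reaction-time scores are invented, and the sketch uses the normal (z) critical value for simplicity; for small samples a t critical value would be more accurate.

```python
import math
from statistics import NormalDist, mean, stdev

def confidence_interval(sample, confidence=0.95):
    """Approximate CI for the mean, using the normal (z) critical value."""
    m = mean(sample)
    se = stdev(sample) / math.sqrt(len(sample))     # standard error of the mean
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # e.g. about 1.96 for 95%
    return m - z * se, m + z * se

# Hypothetical reaction times (ms)
scores = [512, 498, 530, 505, 521, 490, 515, 508]
lo, hi = confidence_interval(scores)
print(f"95% CI: [{lo:.1f}, {hi:.1f}]")
```

The correct reading of the result is not that the true mean has a 95% chance of being in this particular interval, but that 95% of intervals built this way would capture the true mean.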

In conclusion, although these alternatives yield the magnitude of an effect and a range within which the true value is likely to fall, information not provided by significance tests alone, I think they should be used to complement significance testing, not to replace it.

This week’s blog will be discussing why the statement ‘applied research findings are more valuable than theoretical research findings’ is wrong.

Before I go into that, I would like to define theoretical and applied research findings. The latter refers to findings that can be directly applied to real-world problems, derived from research that seeks to find solutions to such problems. For example, Burke et al. (2011) used applied research to answer the practical question of how hazard and safety training influence one’s learning and performance in the workplace. Such answers could be used to design and deliver safety interventions that could potentially improve people’s lives.

The former, theoretical research findings (also known as basic research findings), refers to findings that add to our scientific knowledge, derived from research that seeks to answer theoretical questions and test hypotheses. For example, theories of attention, such as the attentional spotlight theory, help us to understand human behaviour, for instance how one’s direction of gaze doesn’t always indicate the direction of one’s attention (Posner, 1980).

Politicians, among other critics, devalue basic research, labelling it as useless and not worth taxpayers’ money because it is not directly applicable to their lives (Schvartzman and Schvartzman, 2008).

However, this is not true: basic research can be applied to the real world; it just takes time, and it sometimes happens by accident. Who knew that the many basic research findings that came before the discovery of chlorpromazine would one day be applicable to people who suffered from schizophrenia? (Stanovich, pp. 106-107)

Applied research can indeed be driven by basic research. For example, Strayer and Johnston (2004) conducted applied research on driving safety using theories derived from basic research on attention. Also, applied research findings can support previous basic research, for example theories of attention, and produce further research questions. Therefore, theoretical and applied research can be used together to gain a better understanding of an issue and seek to resolve it.

To conclude, neither applied nor basic research findings are more valuable than the other; both are compatible and play an important role in the scientific community. Basic research findings provide a foundation, and applied research uses that foundation to find ways to improve real-life situations.



Gravetter, F. J., & Forzano, L. B. (2009). Research methods for the behavioral sciences.

So it’s just suddenly dawned on me that I have no idea what the difference is between single-case designs and case study designs. So this week I have decided to define the difference between the two methods of research.

A case study design is the use of qualitative methods to do an in-depth study of an individual or a small group of people. The aim is to gain a detailed description of an individual and their behaviour, with no attempt to explain the cause of the behaviour. For example, the case of Eve by Thigpen and Cleckley concerned a woman who suffered from multiple personality disorder and later revealed herself as Christine Sizemore. The case shed light on the nature of the then rare disorder.

A single-case design (also known as a single-subject design) is also an in-depth investigation of a single individual. However, as well as gaining a detailed description of that individual, the design uses experimental research strategies, allowing you to see whether a cause-and-effect relationship exists between variables. It involves manipulating conditions in the study, that is, administering a treatment or intervention, with observations made before the intervention (baseline phase, A) and during it (treatment phase, B). The most common type of single-case design is the ABAB, also known as the reversal design, which involves a baseline phase, a treatment phase, a second baseline phase, and a second treatment phase. For example, a study on the effects of teacher attention on study behaviour, done by Hall et al. in 2003, used the ABAB design and found a causal relationship between teacher attention and desirable classroom behaviours.
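The logic of the reversal design can be sketched with some invented numbers. In this hypothetical ABAB data, the measured behaviour rises in each treatment phase and falls back toward baseline when the treatment is withdrawn, which is the pattern that supports a causal interpretation.

```python
# Hypothetical ABAB data: percentage of time on-task per session.
# The values are made up purely to illustrate the reversal logic.
phases = {
    "A1 (baseline)":  [25, 30, 28, 27],
    "B1 (treatment)": [60, 65, 70, 68],
    "A2 (baseline)":  [35, 32, 30, 33],
    "B2 (treatment)": [72, 75, 70, 74],
}

for name, sessions in phases.items():
    print(f"{name}: mean on-task = {sum(sessions) / len(sessions):.1f}%")

# If behaviour rises in each B phase and returns toward baseline in A2,
# the reversal supports a causal link between treatment and behaviour.
```

If the behaviour did not revert during the second baseline phase, the case for a treatment effect would be much weaker, since some other change over time could explain the improvement.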

So the main difference between the two is that a single-case design lets you establish a cause-and-effect relationship, whereas a case study design only gives a description of an individual and their behaviour.

Gravetter, F. J., & Forzano, L. B. (2003). Research methods for the behavioral sciences. Belmont, CA: Wadsworth/Thomson Learning.