Publication bias in psychology: what it is and why it causes problems

Psychology, particularly on its research side, has been in crisis for some years, and this has hurt its credibility. The problem lies not only in the difficulty of replicating classic experiments, but also in how new articles get published.

The core problem is that there appears to be a prominent publication bias in psychology: articles seem to be published on the basis of how interesting they might appear to the general public rather than the results and scientifically relevant information they offer to the world.

In this article we will try to understand how serious the problem is, what it implies, how this conclusion was reached, and whether it is exclusive to the behavioral sciences or whether other disciplines face the same crossroads.

  • Related article: "Cognitive biases: discovering an interesting psychological effect"

What is publication bias in psychology?

In recent years, various psychology researchers have warned about the lack of replication studies in the field, which has raised the possibility of a publication bias in the behavioral sciences. Although this had been suspected for some time, it was not until the end of the 2000s and the beginning of the following decade that evidence emerged that psychological research had problems, which could mean the loss of valuable information for the advancement of this great, albeit precarious, science.

One of the first warning signs of the problem was what happened with Daryl Bem's 2011 experiment. The experiment itself was simple:

A sample of volunteers was shown 48 words and then asked to write down as many of them as they could remember. Once this was done, they had a practice session in which they were given a subset of those 48 previously displayed words and asked to write them down. The initial hypothesis was that some participants would remember better precisely those words they would later be made to practice.

After this work was published, three other research teams independently tried to replicate its results. Although they followed essentially the same procedure as the original, they did not obtain similar results. Even though this in itself would allow some conclusions to be drawn, it was reason enough for the three groups to have serious trouble getting their results published.

First, since these were replications of a previous work, scientific journals gave the impression of being interested only in something new and original, not in a "mere copy" of earlier research. Added to this, the negative results of the three new experiments were seen as the product of methodologically flawed studies, which would explain the poor results, rather than as new data that might represent an advance for the science.

In psychology, studies that confirm their hypotheses and thus obtain more or less clear positive results seem to end up behaving like rumors: they spread easily through the community, sometimes without anyone consulting the original source or reflecting carefully on the conclusions and discussions of the author or of the author's critics.

When attempts to replicate previous studies with positive results fail, those replications systematically go unpublished. This means that even after an experiment shows that a classic finding could not be replicated, for whatever reason, the authors themselves avoid publishing it because it is of no interest to journals, and so it never enters the literature. As a result, what is technically a myth continues to spread as scientific fact.

On the other hand, there are habits ingrained in the research community, ways of proceeding that are quite open to criticism but so widespread that a blind eye is turned to them: modifying experimental designs to ensure positive results, deciding the sample size after checking whether the results are significant, and selecting previous studies that confirm the hypothesis of the current study while quietly omitting or ignoring those that refute it.

While the behaviors just described are criticizable but, within limits, understandable (though not necessarily tolerable), there are also cases of outright manipulation of study data to ensure publication, where one can speak openly of fraud and a complete lack of scruples and professional ethics.

One of the most embarrassing cases in the history of psychology is that of Diederik Stapel, whose fraud was of enormous proportions: he went so far as to invent all the data for some of his experiments. To put it plainly, like someone writing a novel, he made his research up.

This implies not only a lack of scruples and a scientific ethic conspicuous by its absence, but also a total lack of empathy toward those who used his data in subsequent research, giving those studies, to a greater or lesser extent, a fictional component.

Studies that have highlighted this bias

In 2014, Kühberger, Fritz and Scherndl analyzed nearly 1,000 randomly selected psychology articles published since 2007. The analysis revealed a glaring publication bias in the field of behavioral science.

According to these researchers, effect size and the number of participants in a study should in theory be independent of each other; however, their analysis revealed a strong negative correlation between these two variables in the selected studies. That is, studies with smaller samples reported larger effect sizes than studies with larger samples.
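This pattern is exactly what a selective publication filter would produce. As a rough illustration (a simulation of the general mechanism, not the cited analysis itself), the sketch below runs many hypothetical two-group studies with a small true effect and "publishes" only the statistically significant ones. Among the published studies, sample size and effect size come out negatively correlated, because small studies can only reach significance when they happen to show an inflated effect. All parameters here are illustrative assumptions.

```python
# Sketch: publication filtering induces a negative n-vs-effect-size correlation.
import math
import random

random.seed(42)

def simulate_study(n, true_effect=0.1):
    """One two-group study with n subjects per group.
    Returns (Cohen's d, reached significance?)."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(true_effect, 1) for _ in range(n)]
    mean_c = sum(control) / n
    mean_t = sum(treatment) / n
    var_c = sum((x - mean_c) ** 2 for x in control) / (n - 1)
    var_t = sum((x - mean_t) ** 2 for x in treatment) / (n - 1)
    d = (mean_t - mean_c) / math.sqrt((var_c + var_t) / 2)  # Cohen's d
    t = d * math.sqrt(n / 2)            # approximate t statistic
    return d, abs(t) > 1.96             # crude p < .05 cutoff

# "Publish" only the significant studies, across many sample sizes.
published = [(n, d)
             for n in range(10, 200, 5)
             for _ in range(50)
             for d, sig in [simulate_study(n)] if sig]

# Pearson correlation between sample size and published effect size.
ns = [n for n, _ in published]
ds = [d for _, d in published]
mean_n, mean_d = sum(ns) / len(ns), sum(ds) / len(ds)
cov = sum((n - mean_n) * (d - mean_d) for n, d in published)
r = cov / math.sqrt(sum((n - mean_n) ** 2 for n in ns)
                    * sum((d - mean_d) ** 2 for d in ds))
print(f"correlation between n and published effect size: r = {r:.2f}")
```

Even though the true effect is identical at every sample size, the published record shows a clear negative correlation, the same signature Kühberger and colleagues report.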

The same analysis also showed that published studies with positive results outnumbered those with negative results by a ratio of roughly 3:1. This suggests that it is the statistical significance of the results, rather than any real benefit to science, that determines whether a study gets published.

But apparently psychology is not alone in its bias toward positive results. In fact, it could be said that this is a widespread phenomenon across the sciences, although psychology and psychiatry appear to be the most likely to report positive results and set aside studies with negative or moderate findings. This was observed in a review carried out by the sociologist Daniele Fanelli of the University of Edinburgh, who examined nearly 4,600 studies and found that between 1990 and 2007 the proportion of positive results rose by more than 22%.

  • You may be interested in: "History of Psychology: authors and main theories"

How bad is a failed replication?

There is a mistaken belief that a failed replication invalidates the original result. The fact that an investigation has carried out the same experimental procedure and obtained different results does not mean that the new investigation is methodologically flawed, nor that the results of the original work were exaggerated. Many reasons and factors can cause the results to differ, and all of them give us a better knowledge of reality, which, after all, is the objective of any science.

Replications should be seen neither as harsh criticism of the original works nor as a simple "copy and paste" of an original study with a different sample. It is thanks to them that we gain a deeper understanding of a previously investigated phenomenon, and they make it possible to find conditions under which the phenomenon does not replicate or does not occur in the same way. When the factors that condition whether or not the phenomenon appears are understood, better theories can be elaborated.

Preventing publication bias

Resolving the situation in which psychology, and science in general, finds itself is difficult, but this does not mean the bias must worsen or become chronic. Getting all useful data shared with the scientific community requires effort from every researcher and greater tolerance on the part of journals toward studies with negative results, and some authors have proposed a series of measures that could help end the situation:

  • Elimination of hypothesis tests.
  • A more positive attitude toward non-significant results.
  • Improved peer review and publication processes.

Bibliographic references:

  • Kühberger, A., Fritz, A., & Scherndl, T. (2014). Publication bias in psychology: A diagnosis based on the correlation between effect size and sample size. PLoS ONE, 9(9), e105825. doi: 10.1371/journal.pone.0105825
  • Blanco, F., Perales, J. C., & Vadillo, M. A. (2017). Can psychology rescue itself? Incentives, bias and replicability. Anuari de Psicologia de la Societat Valenciana de Psicologia, 18(2), 231-252. http://roderic.uv.es/handle/10550/21652 doi: 10.7203/anuari.psicologia.18.2.231
  • Fanelli, D. (2010). Do pressures to publish increase scientists' bias? An empirical support from US States data. PLoS ONE, 5(4), e10271. doi: 10.1371/journal.pone.0010271
