Why Hasn't Management Embraced the Open Science Movement?

To explain this resistance, we first need to travel back in time and understand where this Open Science movement comes from…

The public might not know about it, but serious doubts have been raised in the past decade about the quality and replicability of social sciences [1–5]. As a consequence, a growing number of researchers now consider the findings in social sciences to be “incredible”, as opposed to “credible” [6].

This increased skepticism started in the field of psychology a decade ago. Indeed, the year 2011 was a turning point. Through some strange alignment of the stars [7,8], Daryl Bem published a paper demonstrating the impossible result that humans are capable of precognition (i.e., guessing the future) [9]; Diederik Stapel was revealed as a fraudster, ending a prolific career and leading to 57 retractions; and Joe Simmons, Leif Nelson, and Uri Simonsohn published an influential paper showing that common research practices allow researchers to provide evidence for any finding, no matter how ridiculous [3].

This series of events revealed that the field of psychology was more sandcastle than fortress and led to a major epistemic crisis.

P-hacking

At the core of this crisis was a set of research practices called “p-hacking”: Conducting many analyses on the same data set until the desired pattern of results emerges. At the time, p-hacking was seen as a form of “detective work”, a way to reveal the truth that lurks beneath the data. However, it turns out that p-hacking allows researchers to always find support for their hypotheses, even when these hypotheses are wrong.

As Dennis Tourish wrote¹, p-hacking “is a form of ‘data torture’. The data are interrogated mercilessly until they confess they support a given hypothesis.”
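To make this concrete, here is a minimal simulation sketch in Python (my own illustration, not part of the original post): it draws data in which the true effect is exactly zero, then compares a p-hacked strategy (testing several outcomes and peeking at the data repeatedly) with a pre-registered one (a single pre-specified test). The parameter values are arbitrary; the point is that under p-hacking the nominal 5% false-positive rate inflates far beyond 5%.

```python
# Illustrative sketch (not from the original post): how p-hacking inflates
# false positives when the true effect is exactly zero.
#   - "p-hacked": test 3 outcome measures, peek at the data every 10 participants;
#   - "pre-registered": one pre-specified outcome, one test at the final sample size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2011)

def run_study(n_start=20, n_max=60, step=10, n_outcomes=3, alpha=0.05, p_hack=True):
    """Return True if the study ends up reporting a 'significant' result."""
    control = rng.normal(size=(n_max, n_outcomes))    # no true effect
    treatment = rng.normal(size=(n_max, n_outcomes))  # no true effect
    if not p_hack:
        # Pre-registered: a single pre-specified test on the first outcome.
        _, p = stats.ttest_ind(treatment[:, 0], control[:, 0])
        return p < alpha
    # P-hacked: try every outcome at every interim look, stop at the first success.
    for n in range(n_start, n_max + 1, step):
        for k in range(n_outcomes):
            _, p = stats.ttest_ind(treatment[:n, k], control[:n, k])
            if p < alpha:
                return True  # report the "significant" result, hide the rest
    return False

n_sim = 5_000
hacked = sum(run_study(p_hack=True) for _ in range(n_sim)) / n_sim
prereg = sum(run_study(p_hack=False) for _ in range(n_sim)) / n_sim
print(f"False-positive rate, p-hacked:       {hacked:.0%}")   # well above 5%
print(f"False-positive rate, pre-registered: {prereg:.0%}")   # close to 5%
```

This mirrors the point made by Simmons, Nelson, and Simonsohn [3]: Each decision seems innocuous on its own, but taken together they make a “significant” result almost inevitable, even when there is nothing to find.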

Discovering the damage caused by p-hacking had two implications for the social sciences. First, it became clear that many of the effects we had “discovered” should no longer be trusted. Second, it suggested that the peer-review process alone is no guarantee that published findings are true.

The Open Science movement

From there, some researchers concerned about the trustworthiness of their field started racking their brains for solutions and offered tools that would allow psychology to reclaim its credibility. In particular, they came up with three principles: pre-registration, open materials, and open data.

The goal of pre-registration is to limit flexibility in data collection and data analysis. When pre-registering their work, researchers specify in advance (i.e., before running their study) the analyses they plan to conduct. This prevents them from going on fishing expeditions (i.e., running many unplanned analyses based on illusory patterns in the data) and reduces the chances of “false positives” (concluding that there is an effect when there is none).

The goal of open materials and open data is to increase transparency. When researchers share their materials (e.g., the manipulations used to influence people’s decisions, the variables used to measure those decisions…), their analysis (i.e., the steps describing how they produced the results from their raw data), and the raw data itself, they allow others to reproduce the steps of their work, detect problems, and make the scientific community aware of any issue in a timely manner.

The benefits of Open Science: It makes the field actually scientific!

Recent studies suggest that these efforts are working. For instance, while 96% of non-pre-registered studies find evidence for their hypothesis, only 44% of pre-registered studies do so [10]. While this may sound like a bad thing at first glance, it simply means that pre-registered studies are more honest: When the data do not support the hypothesis, researchers cannot massage them until they appear to. In other words, when scientists pre-register their studies, they have less undisclosed flexibility in how they collect and analyze their data, which in turn decreases the chance that they will chase noise in their data and report false positives.

In addition, this open-science revolution led to an unprecedented effort to “clean up” the field. Researchers in psychology ran hundreds of large-scale replications of past effects to find out which ones hold and which should be scrapped. The results are staggering: Of 100 studies published in the top three psychology journals, only 36% replicated [4]. While this effort painted a rather gloomy picture of the field, it also paved the way for better science: Researchers could now rely on those replications to determine which paths were worth investigating and which should be abandoned.

Finally, a recent example reminds us of the importance of open data. In 2012, a team of prominent researchers published a paper showing that signing at the top (vs. at the bottom) of a form could prompt people to be more honest in their reporting [11]. The paper attracted many citations and plenty of media coverage, particularly when one of the authors claimed that this “simple trick” might help the IRS (the American tax office) recover up to $345 billion in taxes [12]. In 2020, the data from the paper were finally made public, and a team of researchers quickly discovered that they were… fabricated [13]! If the journal had required the data to be public when the paper was first published, the fraud might have been discovered much earlier, and a considerable waste of resources, for both academics and organizations, would have been avoided.

Progress is unequally distributed

In psychology, the awareness that common research practices were far from scientific standards has eventually led to large reforms, and there is considerable agreement that, while much remains to be done, the field has improved by leaps and bounds [14].

However, this awareness has not spread evenly across fields. In particular, the field of management seems immune to these debates, while remaining infected by bad research practices. During my PhD (2015–2020), discussions of “p-hacking”, “replication”, “open science”, or “scientific fraud” were fringe, and when I applied the principles of open science in my own research (e.g., by pre-registering my studies), the main reaction from my peers and advisors was resistance and incomprehension.

The result of such denial is a field that looks more and more like a besieged castle, in which people are disconnected from the most recent scientific debates, rely on outdated knowledge and practices, and imperturbably continue to publish dubious effects.

What’s wrong with Management?

What explains this resistance? Why can’t Management reform itself the way psychology did? Here are a few reasons that, in my view, might explain it.

Access to data is problematic

In management, and in social sciences in general, the effects we study are subtle, multiply determined, and therefore small. The problem with small effects is that they require large (and sometimes even very large) sample sizes to be captured.

However, access to data is problematic in management, and even more so when large samples are required. For example, researchers rarely have access to all the employees of a company. Many research questions also require access to specific types of employees (e.g., managers and their subordinates, people working in teams, minority group members), which makes large samples even harder to obtain.

This constraint might have a direct impact on the likelihood that management scholars pre-register their work. If they have a hard time accessing data, they might be particularly reluctant to tie their hands with a pre-registration that will ultimately prevent them from “massaging” the data until it confesses a significant result.

Replications are almost impossible

In management, replications of many studies would probably be much harder to run than in psychology. Not only because, as noted above, access to data can be problematic, but also because many of those effects are context-specific (even if they are rarely, if ever, presented as such): They were obtained in a particular company, a particular industry, or with particular participants. From there, any failure to replicate a past effect can be dismissed as uninformative: “it failed because the context was different”.

This difficulty in replicating past effects raises multiple questions about management as a science: If most effects are context-specific and therefore hard to replicate, are those effects falsifiable? And if they are not falsifiable, how can we build incremental knowledge about human behavior in organizations?

An accountability problem

Unlike many other fields, which have clearly identified stakeholders (e.g., patients in medicine, taxpayers in economics or psychology…), the field of management has no clear stakeholder.

One might argue that the stakeholders of management research are business practitioners. However, a stakeholder must not be understood only as someone who applies the findings of the field, but also as someone who can hold researchers accountable for what they produce. From this perspective, business practitioners are powerless: They cannot hold management scholars accountable for an intervention they implemented and that failed to produce results.

Without stakeholders, producing scientific evidence becomes a game: Something that exists for its own sake, disconnected from reality, having lost sight of its “raison d’être” (i.e., advancing knowledge, finding true and replicable effects, understanding the world) [15]. When there is little to no real-world accountability for what they produce, researchers become driven by what matters “for them”: Publishing, securing grants, getting tenure. In this game, the open-science movement is at best irrelevant, and at worst an impediment.

A field dominated by an entrepreneurial mindset

Unlike in many disciplines, researchers in management do not work in labs. In some cases, they will work with one to three colleagues on a paper, but most of their work is relatively entrepreneurial: They are expected to take charge of their own careers by multiplying projects and publishing as much as possible.

This entrepreneurial orientation favors competition, individualism, and a search for fame and hype. It also leaves little room for “meta-scientific” considerations: Why, how and for whom are we producing this knowledge? Is this knowledge sound enough from a scientific perspective? And if not, how can we improve our research practices? During my PhD, I was told that those questions were irrelevant, “philosophical” considerations. I could not disagree more: It is precisely because we have little consideration for those questions that we are struggling to improve our research practices.

To conclude…

One of the events that spurred my interest in research was the 2008 financial crisis. As a matter of fact, I now see similarities between today’s management scholars and pre-2008 traders. Those traders viewed regulators as rigid bureaucrats who prevented them from making money. I fear that many management scholars perceive open-science reformists in a similar fashion: Killjoys who prevent them from doing whatever benefits them. History was cruel to the traders.


REFERENCES

  1. Honig B, Lampel J, Baum JAC, Glynn MA, Jing R, Lounsbury M, et al. Reflections on Scientific Misconduct in Management: Unfortunate Incidents or a Normative Crisis? Acad Manag Perspect. 2018; 32(4):412–42.
  2. Ioannidis JPA. Why Most Published Research Findings Are False. PLOS Med. 2005; 2(8):e124.
  3. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011; 22(11):1359–66.
  4. The Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015; 349(6251).
  5. Tourish D. The triumph of nonsense in management studies. Acad Manag Learn Educ. 2020; 19(1):99–109.
  6. Vazire S. The credibility revolution. Society for Personality and Social Psychology; 2020; New Orleans, USA.
  7. Engber D. Daryl Bem Proved ESP Is Real. Which Means Science Is Broken. Slate. 2017.
  8. Syed M. The open science movement is for all of us. Department of Psychology, Western Washington University; 2019.
  9. Bem DJ. Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect. J Pers Soc Psychol. 2011; 100(3):407.
  10. Scheel AM, Schijen M, Lakens D. An excess of positive results: Comparing the standard Psychology literature with Registered Reports. Adv Methods Pract Psychol Sci. 2021; 4(2).
  11. Shu LL, Mazar N, Gino F, Ariely D, Bazerman MH. Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end. Proc Natl Acad Sci. 2012; 109(38):15197–200.
  12. Gino F. One Weird Trick to Save $345 Billion. Harvard Business Review. 2013.
  13. Simonsohn U, Simmons JP, Nelson LD. [98] Evidence of Fraud in an Influential Field Experiment About Dishonesty. Data Colada. 2021.
  14. Nelson LD, Simmons J, Simonsohn U. Psychology’s Renaissance. Annu Rev Psychol. 2018; 69(1):511–34.
  15. DeDeo S. When Science is a Game. 2020.

  1. Tourish D. Management Studies in Crisis: Fraud, Deception and Meaningless Research. Cambridge University Press; 2019; p. 87. ↩︎

Zoé Ziani
PhD in Organizational Behavior