From the Editors

Advantages of Starting with Theory

    The field of management is in a period of critical self-reflection about several issues, including the prevalence and potentially pernicious consequences of post-result theorizing or “HARKing” (hypothesizing after the results are known; Kerr, 1998) within the realm of deductive, hypothesis-driven quantitative research. As the common story goes, a researcher collects or obtains a dataset with only a very general research question in mind, or perhaps none at all. Once a dataset including many measures is in hand, he or she scours a correlation matrix for unanticipated significant associations, focusing on those that deviate from conventional wisdom or the body of empirical findings in the literature. Alternatively, the researcher runs dozens of models looking for signs of moderation, mediation, or both. Once a set of “novel” and “significant” findings has been identified, the process of story building begins. The researcher searches for applicable theories, disregarding those too far afield from the measurements in the model and selecting one or more for use in story crafting. With the adopted logical framing in place, the findings are “predicted” under the guise of the “hypothetico-deductive” approach (Hempel, 1966), as though the author had theorized first and analyzed later.

    This approach in the realm of quantitative deductive research is certainly prevalent. About a third of psychology authors surveyed in John, Loewenstein, and Prelec (2012) admitted to this practice, and, although such data from management researchers are not available to my knowledge, my experience is that the practice seems common in our community as well. History has produced many interpretations of the practice, ranging from the benign (e.g., a psychological, but otherwise inconsequential, distinction compared to predictions developed a priori) to the pernicious (see Hitchcock & Sober, 2004, for a review).

    In management and related disciplines, journals are replete with editorials outlining ethical issues arising from post-result theorizing (e.g., Hollenbeck & Wright, 2017; Leung, 2011), papers containing quantitative evidence of bias resulting from the practice (e.g., Bosco, Aguinis, Field, Pierce, & Dalton, 2016; O’Boyle, Banks, & Gonzalez-Mule, 2017), and other papers offering empirical solutions (e.g., Simonsohn, Nelson, & Simmons, 2014). In this editorial, I take a different approach and discuss the likely outcomes of post-result theorizing from the perspective of the review process. My focus is on work submitted as though it was conducted with a hypothesis-driven deductive approach and not on inductive theory building in case-based and other qualitative approaches.

    Perhaps the most frequent question one receives as editor-in-chief of Academy of Management Journal (AMJ) is some variation of this: “What is your best advice for publishing in the Journal?” There is, of course, no clear or foolproof answer to this question. AMJ is a big-tent journal, receiving and accepting manuscripts from across the spectrum of management. Potential paths to success are likely just as numerous as the types of papers that are ultimately accepted in the Journal. Even within topic or research design domains, the process is complex; any offered advice comes with the caveat that there is no magic elixir.

    But, there seems to be an assumption in the literature, as is evident in many of the editorials that appear on the topic, that post-result theorizing is widely used because it is believed to be an effective approach for publishing in high-quality outlets. For example, Starbuck (2016: 171) referred to it as a “success-facilitating practice.” Certainly, papers whose authors have taken this approach have made it through the review process, which has, in turn, created interpretation issues and bias in the literature. Some authors appear to be rather skilled at this type of approach. For the rest of us, however, I would characterize HARKing not as a “success-facilitating” but as a “rejection-creating” practice. It would be difficult to quantify my opinion in the absence of a large prospective study that assessed authors’ approaches to conducting the research and the outcomes of the review process. My judgment is based instead on my past and current editorial experience at AMJ, as well as other reviewing experiences and observations (e.g., via friendly reviews). My experience-based conclusion is that post-result theorizing is generally ineffective in terms of producing high-quality papers. Instead, results-first quantitative papers leave telltale signs that create rejection-leaning commentary from reviewers. Putting aside concerns about ethicality and illegitimacy, HARKing should generally be avoided by authors interested in publishing their quantitative work in top journals. A more effective adage and approach would be to “start with theory.” Below, I outline four telltale signs of post-result theorizing that lead to rejection-creating commentary in the review process. I then outline the advantages of “starting with theory” in the realm of hypothetico-deductive research. Finally, I offer some concluding thoughts and consider situations in which post-result theorizing might have a place at AMJ.


    Telltale Signs of Post-Result Theorizing

    Contorted Theory

    For papers submitted to AMJ, the extent of the theoretical contribution is a key point for decision-making. Perhaps the clearest and most common issue with papers submitted after post-result theorizing is contorted theory. There appear to be two common signals that HARKing has occurred. One is that the theory is, in and of itself, unfit for undergirding the predictions that are being made. Authors will frequently invoke a certain theoretical perspective, typically a broad perspective or view, but fail to use the logic, assumptions, and central tenets of the theory to drive the narrative for the predictions. As a colleague once joked, “We need a theory, and we need it fast!” A related contortionism issue arises when authors invoke some number of different perspectives to justify all the predictions in the model. Kerr (1998) referred to this as the “too-convenient qualifier,” a prediction that arises out of the blue, or is not tied to the main framing of the study, but otherwise receives empirical support. Indeed, arbitrary moderators may be among reviewers’ most frequently cited concerns. To be fair, a paper does not necessarily have to invoke a single, unified framework as a guide for all predictions. Schaubroeck (2013) argued convincingly that a sole devotion to overarching theoretical frameworks can serve to stifle authors’ own creative theoretical ideas. At AMJ, we certainly encourage authors to develop their own novel and interesting theory and framework. But, when a third or fourth theory is invoked to justify yet another moderator or mediator prediction, a flag is raised. As a rule, reviewers do not respond favorably to either of these contorted theory types.

    Poorly Defined Constructs

    The process of retrofitting results to a theory can result in conceptual sloppiness and inattention to details regarding construct definition. It is difficult to say whether constitutive definition problems emanate from the authors’ poor or surface-level knowledge of the underlying theory or are driven by available measurements or adjustments made during the search for significant findings. In either case, the result is the same. These issues are highlighted by reviewers with strong knowledge of the theory, phenomenon, or topic area. Papers are frequently submitted with loosely defined conceptual variables but rather impressive sets of “supported” findings.

    Construct–Measurement Mismatch

    A related problem that arises from post-result theorizing is a misfit between the constructs in the theoretical model and their operationalizations in the execution of the study. In the search for a good-fitting empirical model, variables are often added, deleted, or modified along the way. Later, they are retrofitted within a “theory” and presented, but the conceptualizations found in the theory are a poor fit with what was ultimately tested. This slippage is easily recognized by reviewers who are experts in the theoretical framing used to justify the predictions, raising concerns that are difficult for the authors to address effectively in a revision. This sets the process on a path that often results in an unfavorable decision for the authors.

    Theory–Design Mismatch

    In other cases, the measurements available to the authors do not include the mechanisms suggested by the invoked theoretical framework, which raises suspicions among the reviewers that the authors transitioned to a model that “worked” rather than the one most logically suggested by a theory. For experimental designs, Kerr (1998: 199) stated that reviewers are often surprised “at the nonoptimal way in which an experiment treatment or measure was operationalized, the absence of an obviously informative control condition, or the author’s failure to measure a variable central to the purported mediating process.” For field study designs, I have observed that the explanatory variables are often only loosely connected to the theoretical mechanisms or that the study was not designed with key features necessary for providing support for, or refutation of, the theory. In such cases, the Discussion section also often falls short, as the authors tend to restate the findings of the paper rather than address the underlying theoretical implications of the research or how the study challenges, changes, or advances what we know at a conceptual level.


    Advantages of “Starting with Theory”

    I believe there are significant advantages to an authentic, a priori theory development and testing approach that, in general, serves authors in the hypothetico-deductive genre better than a post-result theorizing approach. I hope to make a broader point: the advantages for authors in the review process go beyond those associated with simply reducing the warning signs listed above. Clearly, a theory- rather than results-driven approach should generate a less contorted, more coherent set of predictions that emanate from or build upon the underlying perspective. Moreover, a “start with theory” approach should allow authors to offer more refined, accurate, and comprehensive definitions of their constructs. Ideally, then, measurements for these constructs would be aligned with the constitutive definitions, and the study would be designed with the features necessary for testing the underlying theoretical mechanisms. These are the simple execution-based pieces of advice that should, on balance, improve reactions to the paper in the review process.

    Furthermore, there are two other, perhaps subtler, advantages to this approach. First, taking a strong theoretical frame at the beginning of the study should help authors identify and articulate where their key theoretical contribution lies. At AMJ, we encourage authors to produce novel, interesting, and theoretically bold work. It may sound counterintuitive to suggest that starting with a solid theoretical framework in mind is a key for producing such novel insights. But identifying the uniqueness and novelty of a given approach is difficult in the absence of a solid understanding of what is already known or assumed to be true in the literature. As a way of simplifying the AMJ mission, my editorial team often relies on this question: “How does this paper challenge, change, or advance what we know at a theoretical level?” From an author’s perspective, this question can be answered more effectively when there is a clear understanding of the existing, relevant theoretical perspectives. Building a strong theoretical framework can help researchers identify which aspects of current theories are well understood, which aspects have yielded conflicting findings, and, importantly for AMJ, where the authors can build, extend, and offer bold alternative thinking.

    A second subtle advantage should come in the form of an improved Discussion section. I find that reviewers are often surprised that authors do not specifically address implications for theory in the Discussion. If the study did not originate with a clear theoretical view, it is certainly more difficult to offer thoughtful reflection on what the study has contributed to the literature on the theory dimension. My own judgment is that, in such cases, the authors do not know or understand exactly what these contributions are (and often they do not exist). Instead, the theory implications section of the Discussion is filled with broad statements of contributions to a topic area or specific restatements of the findings of the current study. An effective Discussion not only revisits the theoretical underpinning of the study, but articulates “in a rich fashion how the study changes, challenges, or otherwise fundamentally refines understanding of extant theory (and/or its core concepts, principles, etc.)” (Geletkanycz & Tepper, 2012: 257). It is simply easier to make this case when one has a clear theoretical foundation from the beginning.


    Does Post-Result Theorizing Have a Place at AMJ?

    The answer to this question is sometimes “yes.” The process of doing research is dynamic; many decisions must be made along the way, difficult choices must be considered, and papers evolve through various drafts and later in the review process. Thorough data analysis sometimes produces discoveries or unexpected findings, and these can be very informative and spur additional fruitful research directions in the literature. I can envision at least two ways such findings can be used effectively in papers for AMJ. First, these findings and the abductive reasoning that follows can serve as a starting point for further specific deductive theory development and follow-up quantitative studies. In the same way that mixed methods studies often begin with an inductive qualitative approach (e.g., see Sonenshein, DeCelles, & Dutton, 2014), quantitative anomalies or surprises in one study can be a spark for more comprehensive theory building and further testing in follow-up studies. Certainly, this type of multistudy progression, which builds from a quantitative surprise through future deductive theorizing and testing, would be an appropriate application of the post-result theorizing approach.

    A second reasonable use of post-result arguments and interpretations would be what Hollenbeck and Wright (2017) referred to as “tharking,” or “transparently” discussing alternative results discovered in exploratory analyses. They suggested that authors include a section of additional findings in the Discussion with some fleshed out, yet preliminary, interpretations. Keeping in mind caveats about alpha inflation, authors’ openness about these additional findings can serve to enrich the paper. As a practical matter, these types of analyses are frequently conducted in the review process anyway, and sometimes appear in sections on robustness checks or behind the scenes in response documents that only reviewers evaluate. If authors believe such discussions can reasonably augment their paper and perhaps spur future theory innovations by other authors, they can be welcome additions to AMJ submissions.


    Conclusion

    Post-result theorizing or HARKing seems to remain a popular choice among authors of quantitative papers. Editorials sounding bells of undesirability and unethicality, and papers presenting evidence of its biasing influence on the literature, do not appear to have stemmed the flow of papers using this approach. To take the issue in a different direction, I have attempted to present a case that the practice often leaves a trail of problematic signals that are uncovered in the review process and frequently lead to rejection. If authors prefer that their hypothesis-driven deductive research receive more favorable reactions in the review process, I encourage them to consider carefully the advantages of starting with theory as an alternative to a results-driven, retrospective theorizing approach. My judgment is that it will stand them in better stead. Strong conceptual framing, proper study execution, conceptual clarity, construct–measure matching, and design features that allow proper testing of underlying theory will reduce the rejection-creating commentary that plagues HARKed submissions. To reinforce this point further, I will also reiterate the commitment of my editorial team to make decisions on manuscripts based on the originality, novelty, and extent of theoretical contribution, as well as on the quality and execution of the research methods, rather than on the pattern of significance that appears in the results (see Shaw, 2017). To the extent that we, as a community of scholars, can take a constructive stance and collectively encourage one another to capitalize on the advantages outlined here for deductive quantitative research, the quality of theorizing should improve and the biases evident in our current literature base should be minimized.

    Thanks to Markus Baer, Katy DeCelles, Jessica Rodell, and Lisa Leslie for insight and input on drafts of this editorial.


    References

    • Bosco F. A., Aguinis H., Field J. G., Pierce C. A., Dalton D. R. 2016. HARKing’s threat to organizational research: Evidence from primary and meta-analytic sources. Personnel Psychology, 69: 709–750.
    • Geletkanycz M., Tepper B. J. 2012. Publishing in AMJ—Part 6: Discussing the implications. Academy of Management Journal, 55: 256–260.
    • Hempel C. 1966. Philosophy of natural science. Upper Saddle River, NJ: Prentice-Hall.
    • Hitchcock C., Sober E. 2004. Prediction versus accommodation and the risk of overfitting. The British Journal for the Philosophy of Science, 55: 1–34.
    • Hollenbeck J. R., Wright P. M. 2017. Harking, sharking, and tharking: Making the case for post hoc analysis of scientific data. Journal of Management, 43: 5–18.
    • John L. K., Loewenstein G., Prelec D. 2012. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23: 524–532.
    • Kerr N. L. 1998. HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2: 196–217.
    • Leung K. 2011. Presenting post hoc hypotheses as a priori: Ethical and theoretical issues. Management and Organization Review, 7: 471–479.
    • O’Boyle E. H., Banks G. C., Gonzalez-Mule E. 2017. The chrysalis effect: How ugly initial results metamorphosize into beautiful articles. Journal of Management, 43: 376–399.
    • Schaubroeck J. M. 2013. Pitfalls of appropriating prestigious theories to frame conceptual arguments. Organizational Psychology Review, 3: 86–97.
    • Shaw J. D. 2017. Moving forward at AMJ. Academy of Management Journal, 60: 1–5.
    • Simonsohn U., Nelson L. D., Simmons J. P. 2014. P-curve: A key to the file drawer. Journal of Experimental Psychology: General, 143: 534–547.
    • Sonenshein S., DeCelles K. A., Dutton J. E. 2014. It’s not easy being green: The role of self-evaluations in explaining support of environmental issues. Academy of Management Journal, 57: 7–37.
    • Starbuck W. H. 2016. How journals could improve research practices in social science. Administrative Science Quarterly, 61: 165–183.