Social Valuation Across Multiple Audiences: The Interplay of Ability and Identity Judgments
Abstract
How is an evaluating audience influenced by previous evaluations made by another audience? This question is critical to individuals and organizations reaching out to multiple audiences for key resources. While extant work has suggested evaluators are influenced by previous evaluations made by their peers, we develop theory about how evaluators’ assessment of a candidate is shaped by previous evaluations made by an external (nonpeer) audience. We argue that the latter represent exogenous indices that affect evaluators in two opposing ways: they positively influence peer valuation by pointing to candidates’ unobservable abilities, yet, since they are conferred by an external audience, they are also indicative of candidates’ deviation from an expected peer identity. The combination of the two opposite effects suggests an inverted U-shaped relationship between exogenous indices and peer valuation. Further, this effect is moderated by the identity proximity between audiences, and the availability of previous peer evaluations (endogenous indices). We test and find support for our arguments using unique data on the peer valuation of 9,502 academic scientists applying for research grants at a research university. Our work contributes to the understanding of valuation and socially endogenous inferences, and has implications for the management of organizations in multi-audience environments.
Organizations depend on the judgment of a variety of stakeholders and constituencies— audiences—that control the critical resources they need to grow and survive (Pfeffer & Salancik, 1978; Rindova & Fombrun, 1999). Audiences exert their scrutiny by forming a valuation of candidates with whom they may engage (Giorgi & Weber, 2015); for instance, investors assess the ability of a company to generate future cash flows before providing financial capital, and customers evaluate organizations from whom they consider buying products or services (Lanzolla & Frankort, 2016; Orlikowski & Scott, 2013; Zuckerman, 2012). In organizational contexts, individuals are regularly evaluated for hiring, promotion, or awards (Ertug & Castellucci, 2015; Giorgi & Weber, 2015; Lamont, 2012). Organizational members also reach out to various audiences for key resources, submitting themselves to the evaluation of external judges deploying diverse “yardsticks” for assessing candidates (Karpik, 2010; Patriotta, Gond, & Schultz, 2011).
Existing research has highlighted the intrinsically social nature of valuation (Lamont, 2012; Zuckerman, 2012). Because the worth of candidates is often not directly observable, evaluators turn to available indices from evaluations previously made by others to complete their assessment, a process known as socially endogenous inferences (Salganik, Dodds, & Watts, 2006; Zhang, 2010; Zuckerman, 2012). Evaluators tend to be partial to candidates who have already received positive evaluations, resulting in herding effects that provide disproportionate advantages to already highly appreciated candidates (Merton, 1968).
While this powerful mechanism is well known and documented for homogenous audiences, we know relatively little about how it plays out when multiple audiences are present. Audiences are delineated by social and symbolic boundaries (Lamont & Molnar, 2002), such that it is unclear as to whether and how a focal audience will be influenced by observable assessments of another audience. In this research, we focus on the common case of candidates evaluated by an audience of peers (Cattani, Ferriani, & Allison, 2014; Shymko & Roulet, 2017)—that is, a set of like-minded actors on whom candidates depend for recognition or resources—and explore the following question: How is the peer evaluation of a candidate influenced by the earlier evaluations made by external, nonpeer audiences?
We address this question by developing a theory that distinguishes between endogenous indices (previous evaluations by peers) and exogenous indices (previous evaluations provided by an external, nonpeer audience). We argue that such indices generally provide two types of information to evaluators: indices of ability (the higher somebody’s past evaluations, the higher their imputed ability), and indices of identity conformance (the higher somebody’s past evaluations, the more they conform to the identity expected by the evaluating audience). In the case of endogenous indices, both dimensions are closely aligned and therefore indistinguishable. However, in the case of exogenous indices, both dimensions diverge; this has implications for how peer evaluators are influenced by the evaluations provided by external audiences.
We develop this argument by considering academic scientists entering peer valuation contests for research grants where peer evaluators have information on evaluations previously performed by an external audience of knowledge users (“industry”), in the form of industry contracts awarded to a focal scientist. These external audience evaluations represent exogenous indices that will influence peers in two ways: first, industry contracts convey positive information about the unobservable ability of the candidate; second, the accumulation of industry contracts may raise concerns about the conformity of the candidate with the expected identity template of an academic scientist, thereby casting doubts over their allegiance to the ethos of academic science. Considering these two counter-balancing effects leads us to hypothesize an inverted U-shaped relationship between exogenous indices and peer valuation. We further predict that this curvilinear relationship is moderated by the identity proximity between the academic and the industry audiences, and by the availability of endogenous indices through an academic’s publishing trajectory.
We test and find support for these arguments using panel data on 9,502 scientists employed by a globally leading research university between 2001 and 2012: all else being equal, scientists’ chances of receiving peer-reviewed grants are highest at moderate levels of industry evaluation, beyond which they recede. The inverted U-shaped relationship is attenuated in disciplines with a proximate identity to industry, such as medicine and engineering, and when scientists have established a strong and regular publication record. These findings are robust to changes in specification, controlling for self-selection and endogeneity. We corroborate our results using semi-structured interviews with scientists involved in reviewing grant proposals.
This study contributes to the literature on social valuation and audience appreciation in an organizational setting (Giorgi & Weber, 2015; Lanzolla & Frankort, 2016; Orlikowski & Scott, 2013; Salganik & Watts, 2008; Tucker & Zhang, 2011; Zuckerman, 2012). Complementing prior work on socially endogenous inferences, we offer a more nuanced account of how past appreciations affect valuation. We propose the novel idea that prior indices of evaluation expressed by external audiences may, under certain conditions, have a negative effect on peer valuation by revealing a potential identity deviance—offsetting the well-documented “Matthew effect” in science (Merton, 1968). By revealing the audience-specific nature of past appreciation indices for social valuation, we also contribute to work exploring the consequences for actors being exposed to multiple or heterogeneous audiences (Cattani et al., 2014; Ertug, Yogev, Lee, & Hedström, 2016; Kim & Jensen, 2014; Pontikes, 2012): our findings highlight the potential downside of reaching out to an external audience for resources.
Overall, this research has implications for individuals and organizations diversifying from their core or peer audiences toward novel, noncore audiences. For universities, our findings underline the benefits for scientists to target other audiences beyond their peers for research resources; yet, the findings also indicate the possible risks of doing so, particularly for those in disciplines distant from industry and with limited peer-recognized track records.
THEORY AND HYPOTHESES
Social Valuation
Social valuation1 is a practice whereby an entity is compared to other entities in a social context, using a single referent or set of referents (Karpik, 2010; Lamont, 2012). Modern life involves many instances of social valuation, including school admission, job recruitment, and product ratings. Social valuation involves attributing a certain value to an individual candidate, a group (e.g., an organization), or a piece of work (e.g., a product, a book, a research article) presented by one or several candidates. The referents for valuation are historically contingent on cognitive models that are intersubjectively shared among audiences, assuming at least temporary permanence (Hsu, Roberts, & Swaminathan, 2012; Zuckerman, 2012). For instance, a firm can be compared to others in terms of alternative referents such as profitability, innovativeness, or environmental sustainability (Dubuisson-Quellier, 2013).
Yet, the value of an entity against a given yardstick is often not directly observable. In this case, evaluators rely on socially endogenous inferences; that is, they take the opinion of other evaluators into account when forming their own judgment (Salganik et al., 2006; Zuckerman, 2012). This means that evaluators infer other audience members’ opinions from observable transactions, such as the purchase of a product, contracts, or awards. In this way, socially endogenous inferences underpin phenomena such as music purchases (Salganik et al., 2006), stock market bubbles (Zuckerman, 2012), and the Matthew effect in science, famously illustrated by Merton (1968: 57): “The world is peculiar in this matter of how it gives credit. It tends to give the credit to [already] famous people.”
Socially endogenous valuation is driven by social influence (Asch, 1956; Cialdini & Trost, 1998; Rao, Greve, & Davis, 2001). The mechanism informing social influence here is informational in nature; evaluators agree with the view of others because they assume this enhances the accuracy of their judgments or actions (Banerjee, 1992; Cialdini & Goldstein, 2004). This is particularly pronounced when it is costly to collect all the required information about the worth of a candidate, or the latter’s quality is hard to observe. In this situation, evaluators use the imputed calculation of worth that informs others’ decisions to make their own valuation of a candidate’s presumed “quality.”
An assumption underpinning socially endogenous inferences is that audience members have “similar desires or needs to that of the decision-maker” (Zuckerman, 2012: 228). Audiences hold theories of value (Zuckerman & Rao, 2004) that are founded on specific shared cognitive frameworks or institutional logics (Thornton, 2004), which define what is deemed valuable and provide yardsticks for gauging success (Friedland, 2013; Glynn & Lounsbury, 2005). The adherence to shared theories of value will apply particularly to closely knit audiences, such as peers (e.g., members of a profession); in this case, while assuming different roles, evaluators and candidates belong to the same community of equals (Wijnberg, 1995). Instances of peer valuation include the evaluation of motion pictures by film professionals (Cattani et al., 2014), the appreciation of theater plays considered for awards (Shymko & Roulet, 2017), and the peer review process in academia (Lamont, 2009). Peers are conscious that they are embedded in a strong culture of shared values and quality standards, and hence will be particularly prone to follow the lead of fellow audience members when evaluating candidates.
However, it is less clear how valuation plays out where a candidate faces multiple audiences (Fombrun & Shanley, 1990). For instance, firms are evaluated by consumers and by investors (Pontikes, 2012), artists operate under the scrutiny of both museums and art galleries (Ertug et al., 2016), and academic researchers face academic peers as grant application evaluators and private sector firms as contract research clients (Bozeman & Gaughan, 2007). Appealing to external (nonpeer) audiences will allow candidates to attract additional resources and build resilience by diversifying into different markets. Yet, different audiences also adhere to different valuation criteria, which may pose challenges to audience diversification. Ertug et al. (2016) showed that museums and galleries use different yardsticks for gauging an artist’s reputation, and Durand and Hadida (2015) suggested that occupational communities punish actors for deviating from their canon of expectations. Shymko and Roulet (2017) found that artistic organizations are penalized by their peers if they seek recognition from corporate actors because they are seen to violate the norms inherent in the artistic logic.
The consideration of external audiences leads to the question of whether and how a peer audience evaluating a candidate will be influenced by the evaluations given to that candidate by an external, nonpeer audience. We approach this question by examining the case of academic science where scientists address two distinct audiences—their academic peers and external industry evaluators—to acquire research funding. We first describe the study context before building our hypotheses.
ACADEMIC RESEARCH FUNDING
Academic scientists appeal to multiple stakeholders as they pursue their research agendas. Most commonly, academics seek positive valuations from their peers when bidding for research grants from government and charitable research funders (Lamont, 2009). Simultaneously, many academics turn to nonpeer audiences for research resources or channels through which they can generate impact from their research. One such external audience is what we refer to as “industry;” that is, organizations involved in the creation and provision of products and services and for whom scientific research often constitutes an important input for their research and development (R&D) activities (Mansfield, 1995)—for instance, via licensing of university inventions (Kotha, George, & Srikanth, 2013). Overall, then, scientists are evaluated by both academia (peer audience) and industry (external audience), generating observable appreciations expressed as awarded grants and awarded contracts, respectively. Below, we comment on the relevance of both types of audiences for academic scientists.
Public funding represents the most important resource for academic research. In the United States (National Science Foundation, 2015) and United Kingdom, respectively, 64% and 55% of universities’ R&D expenditure is government funded (Hughes, Kitson, Bullock, & Milner, 2013). In both countries, income from research funding charities constitutes an additional sizable proportion of universities’ research income. By contrast, funding from business accounts for approximately 4–5% of universities’ research expenditure in both the United States and the United Kingdom (OECD, 2016).
Public funding for scientific research is almost always provided in the form of grants, while funding from knowledge users (which may include private or public corporations) tends to be commissioned via contracts. Grants are awarded by government or charitable foundations with the purpose of supporting academic research, which is understood as research conducted to advance knowledge and generate positive public outcomes (Freeman & Van Reenen, 2009; Jacob & Lefgren, 2011). Grants do not normally stipulate formal deliverables and are often relatively flexible in terms of research focus and resource deployment (Bozeman & Gaughan, 2007). Governments fund the generation of knowledge because private sector entities have few incentives to conduct basic research, and even fewer for publishing their results openly in scientific journals (Dasgupta & David, 1994). Grants form the material cornerstone of public science, a professional system where scientists compete for priority and where the rewards for priority primarily consist of the reputation and status accumulated by scientists as a function of their scientific achievements (Merton, 1973).
Governments and other public science funders have delegated the allocation of grants to the public science system by using peer review to inform allocation decisions; applications are evaluated by scientists who are experts in the relevant fields, though not socially or organizationally close to the evaluated scientist (Chubin, 1994). Unlike in the peer review of scientific papers, the evaluator is generally aware of the applicant’s identity. Because grant applications are proposals for future work, their success depends on the imputed ability of the applicant to generate a favorable outcome from this work (Li, 2017; Li & Agha, 2015); thus, the evaluator implicitly makes a judgment on the value they attribute to the applicant.
Conversely, contracts are commissioned by user organizations because they require research to complement their internal horizon-scanning or problem-solving activities, rather than primarily for the purpose of propelling the frontiers of knowledge (Perkmann & Walsh, 2009). Research projects funded by contracts stipulate specific deliverables and are more frequently applied in nature than projects funded by grants (Bozeman & Gaughan, 2007; Van Looy, Ranga, Callaert, Debackere, & Zimmermann, 2004). This difference is also reflected in the outcomes of contract-funded projects compared to grant-funded projects. For instance, data from the University of California shows that industry-funded projects more often result in licenses compared to publicly funded projects (Wright, Drivas, Lei, & Merrill, 2014), suggesting that industry projects focus on technology development rather than basic science.
The process of how contracts are awarded differs markedly from how grants are allocated (Goldfarb, 2008). While scientific quality and contribution may play a role, the main criterion applied is how a piece of research would further the organization’s ongoing R&D agenda or fulfill other concrete knowledge requirements. This differs from the yardstick applied to grants, which prioritizes (expected) contribution to knowledge. Moreover, contract proposals are usually not sent out for academic peer review but are evaluated by experts inside the awarding organization. Similar to grant allocation, however, contract proposals are forward-looking and are not anonymous; therefore, the decision will be centrally influenced by the confidence the contracting party has in the candidate to deliver the proposal.
Overall, grants and contracts are awarded by two distinct audiences, each using their own process and criteria to value the scientist. Crucially, information on previous evaluations made by one audience is likely to be available to the other audience for evaluation decisions. Specifically, academic peer reviewers will evaluate a peer on the basis of all information they have available on the candidate, including the outcome of previous evaluations conferred by their academic peers (number of grants), as well as previous evaluations conferred by the industry audience (number of contracts). This feature makes this empirical context suitable to address our research question.
The Peer Evaluation of Academic Scientists
Our baseline scenario is that academic scientists are evaluated for the purpose of being allocated research grants by their peers, who judge them, inter alia, on the basis of previous evaluations conferred by industry. In our terminology, academic scientists who apply for research grants are “candidates”; they are evaluated by their scientific peers, whom we refer to as the “peer audience.” Furthermore, candidates have observable previous evaluations (e.g., contracts) conferred by industry, which constitutes an “external audience.”
Grant evaluators face the daunting task of having to decide on the allocation of resources to scientific projects that, by definition, are yet to be fully designed and implemented. They have to judge quality ex ante; as a research project is subject to variations and adjustments, its boundaries are likely to evolve and be redefined, and success may consist of multiple possible outcomes. In judging whether a project will succeed, it will be important for evaluators to consider the applicant and their previous track record, in addition to the submitted project description. Evidence from the interviews that we conducted for this project supports this conjecture. Echoing others, a professor specializing in infection stated: “When I review grant proposals, I evaluate more the person than the grant [proposal] itself. If the person has demonstrated that they can deliver good science, then the content is less important.” The uncertainty surrounding outputs from a promised future research project makes socially endogenous inferences about the applicants more salient, compared to ex post valuation of completed outputs as practiced in the journal publication process. While the double-blind review process for evaluating scientific manuscripts is designed to shield them from socially endogenous inferences, grant evaluators are explicitly encouraged to use the information at hand, primarily provided through the application file, to form their opinion about the applicants (Laudel, 2006).
Absent direct observation, the applicants’ background provides critical clues; it delivers a set of “indices”—unalterable features determined in the past2—that evaluators can use to complete their assessment of the candidates. Such indices include personal attributes, such as age and gender, and biographical information about past achievements, including degrees, positions, accolades, and performance records. Prior work has referred to the latter as signals of unobservable abilities (Podolny, 2005); these reputation signals are based on personal merit and past achievements, whereas status signals originate in affiliations with established social hierarchies (Washington & Zajac, 2005). Stern, Dukerich, and Zajac (2014), for instance, identified life scientists’ past publications and citations as reputation signals, and school affiliations as status signals. While indices might not all be signals as per Spence’s (1973) definition (i.e., costly to produce and manipulable by the candidate), they are key to the evaluation process in that they are available to peer evaluators to help complete a partial picture of the candidate.
Observable past audience evaluations—for instance, the number of grants previously conferred—are a specific type of index available to a peer evaluator; these indices are the focus of this paper. These indices—either reported by the candidates or publicly available—are typically positive because grant applications, and thus rejections, are not disclosed. The counterfactual is an absence of positive indices—implying that the candidate has either not applied for valuation in the past, or has applied but has been rejected.
We argue that such indices generally provide two types of information to evaluators: indices of ability, and indices of identity conformance. As per previous work on socially endogenous inferences, positive previous peer evaluations positively influence evaluators in their verdict on a candidate because they indicate ability (Zuckerman, 2012); the higher the index, the higher the presumed ability or quality. However, departing from previous work, we argue that previous evaluations also indicate identity conformance: the higher somebody’s past evaluations, the more they conform to the identity expected by the evaluating audience. This is because previous evaluations conferred by peers imply a form of certification, which is a social clue that assists decision making under uncertainty (Polidoro, 2013): high evaluations conferred demonstrate that a candidate has previously sustained the scrutiny of demanding evaluators that apply the strict criteria shared by the community of peers. Overall, then, previous evaluations represent both indices of ability (indicating the candidates’ ability to conduct proper research) and of identity (indicating conformance with the prescribed identity of a scientist). For indices stemming from peer evaluation—termed herein endogenous indices—the index of identity conformance will be aligned with the index of ability, making both indistinguishable in practice.
The situation will be different when a peer evaluator has access to evaluations provided by an external audience. We call these exogenous indices. The distinction between endogenous and exogenous indices is key to the understanding of peer valuation; because exogenous indices originate from an external audience—for example, industry in our case—the indices for ability and identity conformance are no longer aligned. We develop this argument in detail below.
Exogenous Indices and Peer Valuation
First, for a peer evaluator, we expect observable evaluations conferred to a candidate by an external audience to function as an index of ability that is positively viewed.3 Previous research has suggested that when evaluators face uncertainty about the unobserved quality of a candidate, they find the views of another audience informative, particularly if the latter is deemed to control useful information or analytical capabilities (Pollock, Rindova, & Maggitti, 2008). In our case, while industry executives deciding on the attribution of contracts may not all be trained scientists, peer evaluators are likely to believe that these executives value candidates based on some form of relevant merit; for example, the ability to acquire external resources, orchestrate research projects, and successfully manage collaborations (Owen-Smith & Powell, 2001; Siegel, Waldman, Atwater, & Link, 2003). Industry contracts hence increase a scientist’s worth in the eyes of academic audience members as an index of imputed ability, indicating the scientist’s likely future performance in carrying out the research project under evaluation. When read as an index of ability, the industry evaluation of a candidate is positively related to his or her valuation by peers.
Second, however, when read as an index of identity conformance, a high valuation given to a candidate by an external audience—available in the form of exogenous indices—may lead peer evaluators to a different, more negative conclusion. This is because peer evaluators have a distinct understanding of what constitutes appropriate behavior and, by implication, the identity profile of an actor conforming with the institutional logic of the field (Thornton et al., 2012). Such identity considerations will apply particularly to peer audiences where alignment with the appropriate values and norms—e.g., the logic of a profession (Smets, Morris, & Greenwood, 2012)—represents a critical element of social valuation, as is the case in academia. In her in-depth study of academic peer reviewers, Lamont (2009) documented the emotional and interactional nature of peer valuation: peer evaluators rate the candidates against their self-image—“what is most like me” (Lamont, 2009)—and, by extension, form judgments by asking “is the candidate one of ours?” In other words, peer evaluators define excellence not only in terms of competence but also in terms of conformity to an idealized identity template describing who peers are supposed to be and how they are supposed to behave (Zhou, 2005).
The professional identity of an academic scientist allows some level of appreciation by nonpeer audiences. In many academic fields, there is a general understanding that scientists, as part of their professional activities, regularly engage with industrial actors. This understanding is promoted by government science funding bodies in the quest to make academic research more impactful via patenting, commercialization and open innovation (Mowery & Nelson, 2004), and is also reflected in scientists’ recognition that industry engagement can be instrumental for driving their research agendas (D’Este & Perkmann, 2011).
However, there is also evidence that there are limits to the degree of involvement with industry seen as permissible in the academic system. For instance, Lee (1996) suggested that a certain amount and certain types of industry engagement are seen as legitimate by peers, while an excessive amount is viewed negatively. Similarly, Jain, George, and Maltarich (2009) suggested that academic entrepreneurship is an acceptable ancillary identity for academics but becomes undesirable once it crowds out the default, academic self-understanding of a scientist. Hence, while some level of industry appreciation falls in the range of accepted (but not required) practices as it fits the prototypical identity template of an academic scientist, the accumulation of excessive exogenous indices may raise doubts with academic evaluators, leading them to question the candidate’s adherence to the ethos of public science (Carroll & Swaminathan, 2000; Zuckerman & Kim, 2003). High evaluations given by industry indicate that a candidate pursues activities such as industry-informed research, consulting with industry, and the commercialization of inventions, which are often at odds with the core notions of being an academic (Jain et al., 2009). In the same way that an artistic audience may negatively regard the appreciation of a candidate by commercial actors (Glynn, 2000; Shymko & Roulet, 2017) as a sign of deviation from an artistic identity rooted in uniqueness and aesthetic appeal, academic evaluators may unfavorably judge the accumulation of a candidate’s industry appreciation. As an index of identity conformity, being highly valued by industry therefore indicates deviation from the expected peer identity, and is negatively related to candidates’ valuation by their academic peers.
In all, these arguments suggest that academic evaluators are likely to perceive a candidate’s industry evaluation as an index of both ability and identity conformity. As an index of research abilities, industry contracts positively contribute to the peer valuation of scientists: larger numbers of industry contracts signal higher ability to the evaluators. Yet, industry contracts as an index of (peer) identity conformity have a different, nonlinear relationship with peer valuation. Low levels of industry appreciation are in the range of accepted practices in most academic disciplines, and hence have little impact on valuation. However, the risk of deviance from an expected academic identity becomes increasingly salient to evaluators as candidate scientists add more industry contracts to their résumé.
As shown by Haans et al. (2016), and illustrated in Figure 1, the combination of two distinct effects, a positive linear ability effect (a) offset by an increasingly negative identity effect (b), results in a curvilinear, inverted U-shaped relationship (c). As a result, we expect intermediate levels of industry contracts, as exogenous indices of ability and identity conformity, to be associated with the highest peer evaluation. A moderate level of contracts confirms the competence of a candidate in the eyes of peer evaluators while still being at a level where no significant deviation from the prototypical identity of a scientist is signaled.
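In stylized form (our notation, introduced for illustration only and distinct from the estimation model reported later), the combination of the two effects can be written as a quadratic function of the exogenous index x:

```latex
% Stylized combination of the two effects: a positive linear ability effect (a)
% and an increasingly negative identity effect (b), both with positive coefficients.
V(x) = \alpha x - \beta x^{2}, \qquad \alpha > 0, \; \beta > 0
% The combined curve (c) is an inverted U with turning point
x^{*} = \frac{\alpha}{2\beta},
% so that V is increasing for x < x^{*} and decreasing for x > x^{*}.
```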
Hypothesis 1. There is an inverted U-shaped relationship between the industry evaluation and peer evaluation of academic scientists.

FIGURE 1 Predicted Inverted U-shaped Relationship and Moderations
In the above hypothesis, we argue that a peer audience will discount a candidate who is highly valued by an external audience because the candidate fails to meet the focal audience’s identity expectations. If this argument holds true, the two identified effects should be conditional on other factors. We discuss two important boundary conditions: evaluators’ perceptions of how proximate the external audience’s identity is to their own, and the availability of informative endogenous indices.
The Moderating Effect of Audience Identity Proximity
A critical assumption behind the negative effect of external valuation on peer assessment is that evaluators interpret high external valuation as indicative of a deviation from the expected identity of the candidate; therefore, peers will discount the valuation of the scientists having engaged repeatedly with industry. The magnitude of this discount will depend on the degree to which the identity expectations between the peer and external audiences overlap. We conceptualize this overlap by using the notion of (discipline-level) identity proximity, which we define as the degree to which the peer audience perceives the external audience as having a similar identity to their own (Gioia, Price, Hamilton, & Thomas, 2010; Glynn, 2008; Jourdan, Durand, & Thornton, 2017).
Applied to our context, academic disciplines (and their associated academic communities) vary in how proximate they are to industry (Sauermann & Stephan, 2013). Some scientific fields, such as geoengineering, aeronautical engineering, and many medical areas, are closer to industry as they apply basic research to the solution of technological problems, which is also an objective embraced by industrial R&D laboratories. Notably, compared to others, these fields tend to be more directly relevant to the development of industrial technology (Klevorick, Levin, Nelson, & Winter, 1995) as they devote academic research explicitly to technological applications (Schartinger, Rammer, Fischer, & Fröhlich, 2002). One of the implications of this proximity is that scientists in these areas tend to work closely with industry personnel, jointly conduct research, and coauthor journal publications (Cohen, Nelson, & Walsh, 2002; Mansfield, 1995). Accordingly, the expected identity template of scientists is more accepting of industry involvement and appreciation in these more applied disciplines, where the criteria for what constitutes a pattern of expected behaviors, values, and attitudes on the part of a scientist are more similar to those viewed as appropriate within industrial R&D environments, compared to disciplines more distant from industry. By implication, in these latter fields, say particle physics or pure mathematics, academic scientists will arguably be less aligned with industrial R&D executives in terms of judging what constitutes high conformity with the identity expected from a researcher. Among other things, audience distance materializes in disciplines such as mathematics through a lower level of research coauthorship with industry, compared to more proximate disciplines like engineering or medicine (Tijssen, 2012).
We expect audience identity proximity to condition the (negative) identity conformity effect of exogenous indices on peer valuation in such a way that the effect will be considerably reduced in the academic disciplines most proximate to industry. As the audience identity of an academic discipline and industry becomes more proximate, the symbolic boundary (Lamont & Molnar, 2002) separating the peer and the external audience becomes less salient. Receiving positive external valuations is then considered less of an indication of a deviation from the expected identity of a scientist, and induces fewer doubts about the candidate’s commitment to the criteria championed by the academic audience (Lamont, 2009). At the same time, identity proximity is unlikely to significantly affect the positive (ability-informed) effect of external influences on peer valuation. In all disciplines, positive external valuations by industry convey appreciation of the skills and competences of the scientist.
Specifically, we expect variation in audience identity proximity at the discipline level to moderate the inverted U shape between external and peer valuations in two ways (Figure 1). On the one hand, it will result in a flattening or steepening of the U shape (d) predicted in Hypothesis 1; on the other, it will horizontally shift (e) its turning point (for a discussion of the two types of curvilinear moderation effects, see Haans, Pieters, & He [2016]). First, when identity proximity is high, peer reviewers are less sensitive to the negative identity effect of exogenous indices; in our context, this means that the negative marginal identity effect of an additional industry contract will be weaker. For the inverted U shape, this implies that the curve flattens when audience identity proximity is high. Second, the general acceptance of industry appreciation is higher in the academic disciplines with a closer identity to industry (e.g., engineering or medicine) compared to other disciplines, because working with industry is part of the expected identity template of academic scientists. For the inverted U shape, this implies that the turning point shifts rightwards when identity proximity is high compared to other disciplines. Formally, we propose the following two hypotheses:
Hypothesis 2a. The inverted U-shaped relationship between industry evaluation and peer evaluation of academic scientists is moderated by identity proximity, such that it is attenuated in academic disciplines with high identity proximity with industry, and accentuated in disciplines with low identity proximity with industry.
Hypothesis 2b. The inverted U-shaped relationship between the industry evaluation and peer evaluation of academic scientists is moderated by identity proximity, such that its turning point occurs at lower levels of industry evaluation in disciplines with low identity proximity with industry, and at higher levels of industry evaluation in disciplines with high identity proximity with industry.
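Expressed in terms of the same stylized quadratic used above (again our illustrative notation, not the estimated model), and letting z denote identity proximity, these two predictions correspond to changes in the curvature and the turning point of the curve:

```latex
% Quadratic specification with interaction terms; z denotes identity proximity.
V(x, z) = \beta_{1} x + \beta_{2} x^{2} + \beta_{3} x z + \beta_{4} x^{2} z, \qquad \beta_{2} < 0
% Flattening or steepening (d): the curvature equals \beta_{2} + \beta_{4} z; a positive
% \beta_{4} attenuates (flattens) the inverted U as z increases, as in Hypothesis 2a.
% Turning-point shift (e):
x^{*}(z) = -\frac{\beta_{1} + \beta_{3} z}{2\,(\beta_{2} + \beta_{4} z)}
% Hypothesis 2b predicts that x^{*}(z) increases (shifts rightward) as z increases.
```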
The Moderating Effect of Endogenous Indices
The effect of exogenous indices on peer audience valuation is unlikely to be homogenous within a population of candidates. Those with an established reputation for quality and a strong identity within the peer community will be less sensitive to the effect of valuations provided by external audiences compared to other candidates. Accordingly, we expect the relationship between industry valuation and peer valuation to be moderated by scientists’ endogenous indices for quality—that is, measures of quality aligned with the yardstick determined by the peer audience. Overall, academic quality will attenuate the effect of external evaluations on peer valuation. We develop our arguments below.
Since ex ante evaluation occurs in a situation of uncertainty, evaluators will use all information on a candidate that could provide clues about their likely future performance. Extant research has suggested that evaluators often look at candidates’ previous track record within a relevant field as a proxy for unobservable intrinsic quality (Kotha & George, 2012; Stern et al., 2014). This is based on the reasoning that somebody’s value is determined by the importance or quality of their previous actions (Podolny & Phillips, 1996). For scientists, these previous actions primarily refer to their published research output, which routinely undergoes peer review and hence can be regarded as certified according to the socially shared valuation criteria held within the peer audience. One may then argue that the higher the quality of a scientist’s previous actions, the lower the uncertainty an evaluator faces when judging the candidate on future research outcome.
Lower uncertainty on a candidate’s intrinsic quality reduces the need for an evaluator to rely on other available information, such as exogenous indices. The evaluator knows that the latter originate from an external audience and that they represent a proxy for both ability and identity conformity. Since these two dimensions are observationally not distinguishable, a peer evaluator is likely to choose to reduce his or her reliance on exogenous indices where possible, in order to avoid misjudgment of a candidate. This will occur when a candidate’s previous scientific track record appears strong, affecting both the mechanisms we postulate to produce the curvilinear pattern predicted by Hypothesis 1. On the one hand, the positive effect of the ability component of the exogenous indices will be weakened because it will be overridden by peer certification (information on the candidate’s publication record) that is more reliable and fine-grained. On the other hand, the nonlinear negative effect of the identity conformity component of the exogenous indices will be mitigated because a strong academic track record will compensate for any indication that a scientist fails to conform to the ideal identity preferred by the peer audience. Well-published scientists have demonstrated a focused academic identity, and their evaluation is likely to be subject to typecasting, whereby their academic identity becomes sticky and resilient to disturbances such as engagement with industry (Zuckerman, Kim, Ukanwa, & Von Rittmann, 2003), which makes them relatively immune to the identity-diluting effects of being highly appreciated by industry. Hence:
Hypothesis 3a. The inverted U-shaped relationship between industry evaluation and peer valuation of academic scientists is moderated by the quality of their publishing track record, such that it is attenuated when quality is high, and accentuated when quality is low.
An alternative way for the peer audience to judge the intrinsic quality of a peer candidate is to consider the consistency, rather than aggregate quality, of his or her publishing track record. If the past performance of a candidate is irregular, peer reviewers will be more uncertain about their future expected performance. This matters because, for the evaluation of a grant proposal, funding agencies will prefer candidates who are likely to deliver the work promised. For instance, the United Kingdom’s Medical Research Council requires academic grant application reviewers to consider whether applicants are “best-placed to deliver the proposed research” (https://www.mrc.ac.uk/documents/pdf/reviewers-handbook). The consistency of a candidate’s record of production is considered by reviewers as an endogenous index of likely future performance. Its effect is analogous to the effect of quality of an academic’s publishing record. The higher the irregularity of a scientist’s publishing record, the lower their imputed quality. Hence, we argue:
Hypothesis 3b. The inverted U-shaped relationship between industry evaluation and peer valuation of academic scientists is moderated by the irregularity of their publishing track record, such that it is attenuated when irregularity is low, and accentuated when irregularity is high.
DATA AND METHODS
Study Context, Sample, and Data
Studying how industry evaluation affects the peer evaluation of academic scientists is challenging in terms of data requirements. While awarded grants may be known, data on unsuccessful grant applications are often neither disclosed by the funders, nor reported by the scientists. Furthermore, data on the industry evaluation of academic scientists are rarely disclosed, or are subject to censoring as only the most prominent (and larger) collaborations with industry are publicized.
For the purpose of this study, we assembled a unique dataset on the full population of academic scientists employed by Minerva (pseudonym), a large U.K. research university. The university has approximately 15,000 students and 3,700 academic staff, and is a top recipient of competitive government science funding in the United Kingdom. Minerva espouses scientific excellence as its guiding core value, and this criterion is central for hiring and promotions, as well as for organization-level decisions on, for instance, the establishment of new centers and allocation of internal resources. Simultaneously, the university statutes define a strong mandate to render scientific knowledge useful via application for the benefit of industry and society. The university has built a significant commercialization subsidiary and operates a large, centrally located unit tasked with helping faculty attract private sector funding and retain industrial partners by professionally managing client accounts. As a result, Minerva is a large recipient of industry funding, which amounts to 7% of its research income. Results from a 2013 survey conducted by the authors among the Minerva faculty suggest that while collaboration with industrial partners is seen as mission-critical both in terms of its resource contribution and its effect on increasing the impact of scholarly research, it is perceived as serving the ultimate purpose of advancing the frontiers of science, rather than as a goal in itself.
Building on Minerva administrative records, we collected year-by-year information on all 9,502 academics employed by the university between 2001 and 2012. For each individual scientist, we gathered full information on their research funding applications, successful or not. This includes information on individuals’ grant applications (28,579), as well as their successfully awarded grants (7,427), and all the industry contracts (1,817) awarded by private sector firms, public health organizations and hospitals, and central government and authorities. We also accessed information held in Minerva records on scientists’ organizational demographics, including their rank, departmental affiliation, and length of tenure. We extracted bibliographic data on individuals’ publications from a system that requires individuals to edit their publication records as harvested from the Institute for Scientific Information (ISI) Web of Knowledge and PubMed, and publish them as an edited, approved list on their personal Minerva webpage. This means that our publication records are author approved, and hence more accurate than records downloaded from bibliographic databases, which frequently suffer from name disambiguation issues (Azoulay, Stellman, & Zivin, 2006). To each journal publication record, we added journal-specific bibliometric information provided by ISI Web of Knowledge, such as journal subject categories and year-specific journal impact factors. We further used data on individuals’ industry links as presented on their university webpages, and their patents. Our effort resulted in an unbalanced panel dataset of 34,647 scientist–year observations.
Dependent Variable
Peer evaluation.
To account for the evaluation of a scientist by their peer (academic) audience, we consider the count of grants awarded to each scientist, as a principal investigator, in a given year. The values for this variable range from 0 to 23 in the observation period. The number of grants is an appropriate measure of peer evaluation because each additional grant is based on an additional attribution of value to the researcher by a panel of peer reviewers.
Independent Variables
Industry evaluation.
Our theory suggests that a scientist’s evaluation by the external (industry) audience affects their evaluation by the peer (academic) audience. When evaluating a grant application by a scientist, reviewers are provided with résumés, which contain information about the applicant’s record of funding from various sources. We measured the scientist’s industry evaluation by cumulating the number of contracts acquired by them as a principal investigator up to the year under scrutiny (see Dokko & Gaba, 2012, for a similar approach).
Identity proximity with industry.
In Hypotheses 2a and 2b, we expect our main effect to be moderated by the proximity of a candidate to industry. To operationalize identity proximity with industry, we selected a measure that indicates a candidate’s membership in a disciplinary grouping, rather than an individual measure of proximity, because grant applications are evaluated by members of those broader disciplinary groupings. We use the faculty affiliation of the scientists in our sample as a proxy for their identity proximity: Interviews with Minerva scientists across all disciplines—conducted in the context of an ongoing inductive study about university–industry collaboration (Perkmann, McKelvey, & Phillips, in press)—revealed that engineering and medicine were the most proximate disciplines to industry, while natural sciences and business were more distant. Accordingly, we created a dummy variable, labeled identity proximity, which equals 1 when the scientist belongs to the faculties of engineering and medicine, and 0 otherwise.
We validated the identity proximity variable by using a discipline-specific measure of proximity based on university–industry collaboration intensity (Tijssen, 2012). Calculated for each of the ISI journal categories, the measure ranges between 0 and 0.17, and captures the share of industry-affiliated authors in all papers published in any given ISI journal subject category between 2009 and 2013. Applying the industry proximity values of each journal category to the publication record of each scientist and calculating the mean for each faculty, we obtained the following values: 0.071 for engineering, 0.061 for medicine, 0.046 for natural science, 0.039 for business. These measures justify our decision to attribute high industry proximity to members of the two former faculties, and low industry proximity to members of the latter ones.
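As an illustration of this validation step, the short sketch below maps category-level proximity scores onto individual publication records and averages them by faculty; the data and column names (journal_category, faculty, industry_author_share) are hypothetical rather than the actual Minerva fields.

```python
import pandas as pd

# Hypothetical publication records and Tijssen-style category proximity scores
# (share of industry-affiliated authors per ISI journal subject category).
pubs = pd.DataFrame({
    "scientist_id": [1, 1, 2, 3],
    "faculty": ["engineering", "engineering", "medicine", "business"],
    "journal_category": ["ENG-A", "ENG-B", "MED-A", "BUS-A"],
})
category_proximity = {"ENG-A": 0.09, "ENG-B": 0.05, "MED-A": 0.06, "BUS-A": 0.04}

# Attach each publication's category-level proximity, then average by faculty.
pubs["industry_author_share"] = pubs["journal_category"].map(category_proximity)
faculty_proximity = pubs.groupby("faculty")["industry_author_share"].mean()
print(faculty_proximity)
```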
Quality of publishing record.
To examine Hypothesis 3a, we measured the quality of a scientist’s publication record by using their cumulative impact factor—i.e., the sum of the impact factors of the journals in which each of their publications appeared, up to the focal year. Since 1997, the ISI Web of Knowledge has released journal impact factors for both science and social science titles. We downloaded this information for about 11,000 journals, for each year between 1997 and 2012, from the ISI Web of Knowledge website. The impact factor of the journal in which an article is published is a widely accepted measure of the quality of the article because journals with higher impact factors are perceived as more prestigious and impose stricter quality criteria on the manuscripts they receive (Toole & Czarnitzki, 2010). Therefore, scientists with a higher cumulative impact factor will be regarded as being of higher quality by their academic audience. We use the natural logarithm of the cumulative impact factor in our models.
Irregularity of publishing record.
To test Hypothesis 3b, we measure the irregularity of a scientist’s record by dividing the total number of years after their first publication in which they published no articles in a journal with an ISI impact factor by the scientist’s academic age. The variable ranges between 0 and 1; scientists who publish at least one such article every year receive a score of 0, while their score moves closer to 1 the more irregularly they publish.
All independent variables are lagged by one year (t-1) in our estimations.
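For concreteness, the sketch below illustrates one way the main independent variables could be constructed from a scientist–year panel. All column names are hypothetical, the toy panel is assumed to cover every year since each scientist's first publication, and log1p is used here as one possible way to handle zero cumulative impact factors.

```python
import numpy as np
import pandas as pd

# Hypothetical scientist-year panel (column names are illustrative only).
panel = pd.DataFrame({
    "scientist_id":     [1, 1, 1, 2, 2, 2],
    "year":             [2001, 2002, 2003, 2001, 2002, 2003],
    "first_pub_year":   [2001, 2001, 2001, 2001, 2001, 2001],
    "new_contracts_pi": [0, 1, 2, 0, 0, 1],                   # contracts won as PI that year
    "pub_impact_sum":   [3.2, 0.0, 5.1, 1.0, 2.4, 0.0],       # sum of journal IFs that year
}).sort_values(["scientist_id", "year"])

# Helper that regroups the current panel by scientist for a given column.
grp = lambda col: panel.groupby("scientist_id")[col]

# Industry evaluation: cumulative number of contracts acquired as PI up to the focal year.
panel["industry_evaluation"] = grp("new_contracts_pi").cumsum()

# Quality of publishing record: (log of the) cumulative journal impact factor.
panel["quality"] = np.log1p(grp("pub_impact_sum").cumsum())

# Irregularity: years without an ISI-indexed publication divided by academic age.
panel["academic_age"] = (panel["year"] - panel["first_pub_year"]).clip(lower=1)
panel["no_pub_year"] = (panel["pub_impact_sum"] == 0).astype(int)
panel["irregularity"] = grp("no_pub_year").cumsum() / panel["academic_age"]

# All independent variables are lagged by one year within scientist.
for col in ["industry_evaluation", "quality", "irregularity"]:
    panel[col + "_lag1"] = grp(col).shift(1)
```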
Control Variables
We include in our models a number of individual-level controls that may affect the relationship under examination. We also include dummy variables for departmental affiliation.
Academic age.
Because scientists’ incentives to bid for research funding may depend on the stage of their career (Thursby, Thursby, & Gupta-Mukherjee, 2007), we control for their academic age, operationalized as the number of years since their first publication at t-1.
Tenure.
The duration of employment at a university may influence the peer evaluation, as peer reviewers may perceive more mobile academics as more dynamic and ambitious than others. We therefore control for each scientist’s number of years of employment at Minerva in t-1.
Grant proposals filed.
Scientists who are more active in applying for research funding are more likely to receive grants compared to those who bid less; we therefore include the number of grant proposals filed by each scientist in t-1, operationalized as a time-variant categorical variable (Azoulay, Ding, & Stuart, 2009).
Previous peer evaluation.
Peer evaluators may favor scientists who show a positive track record in research grant acquisition (socially endogenous inference); we control for the cumulative number of grants awarded to each scientist up to t-1.
Team quality.
As evaluators’ decisions may be driven by observable characteristics of previously awarded grants (Criscuolo, Dahlander, Grohsjean, & Salter, 2017), for any given grant awarded in t, we calculate a principal component capturing (i) the cumulative number of grants awarded up to t-1, (ii) the cumulative monetary value of the grants awarded up to t-1, and (iii) the cumulative publication impact factor of the articles published up to t-1, for all Minerva principal investigators and coinvestigators linked to the grant. We include the first principal component (eigenvalue = 1.96) among the covariates.
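As a minimal sketch of this step, using simulated numbers and hypothetical column names rather than the Minerva data, the first principal component of the three standardized cumulative measures can be extracted as follows:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical grant-level measures for the investigators linked to each grant:
# cumulative grants, cumulative monetary value, and cumulative impact factor up to t-1.
grants = pd.DataFrame({
    "cum_grants": [2, 5, 0, 7, 3],
    "cum_value":  [1.2e5, 6.0e5, 0.0, 9.5e5, 2.1e5],
    "cum_impact": [14.0, 52.3, 0.0, 80.1, 22.7],
})

# Standardize the three measures and keep the first principal component as team quality.
X = StandardScaler().fit_transform(grants)
pca = PCA(n_components=1)
grants["team_quality"] = pca.fit_transform(X)[:, 0]

# On standardized data, the explained variance of the first component corresponds to
# its eigenvalue (1.96 in the paper's data).
print(pca.explained_variance_)
```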
Patents.
Patenting activity may be considered by evaluators when judging a grant proposal. To control for this, we account for the cumulative number of European Patent Office patents granted to each scientist up to t-1, as shown in the Espacenet database.
Industry experience.
Academic peers may take industry experience into account when evaluating candidates, and we therefore include a count of the number of years of collaboration with industrial partners, as stated on scientists’ official Minerva personal webpages, up to t-1.
Empirical Method
We proceed in three steps. We first conduct a panel data analysis of the full population of scientists at Minerva over 2001–2012, controlling for self-selection into grant applications, unobserved heterogeneity, and autocorrelation. After submitting our findings to a battery of robustness checks, we conduct two confirmatory analyses. First, we seek to replicate our results using a coarsened exact matching procedure designed to rule out remaining concerns relating to unobserved abilities and interests that may both affect the industry evaluation and the peer evaluation of the scientists we study. Second, we use interview material to further corroborate and flesh out our quantitative findings.
Full panel analysis.
The relationship between industry evaluation and peer evaluation should be assessed with care. First, not all scientists apply for grants, meaning that our dependent variable (peer evaluation) is only observed for a subsample of the population (Certo, Busenbark, Woo, & Semadeni, 2016). Furthermore, scientists who compete for grants do so based on factors that may also affect peer evaluation. To address potential selection bias, we estimate scientists’ probability of submitting at least one grant proposal as a principal investigator, using a Probit model, and compute a correction factor (inverse Mills ratio) to be included in the second-stage outcome equation (Heckman, 1979) (see Table A1).
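A simplified sketch of this first stage is shown below; the data are simulated, the covariate names are hypothetical, and only the Probit estimation and the inverse Mills ratio computation are illustrated (the ratio would then enter the second-stage outcome equation as an additional regressor).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated scientist-year data; covariate names are illustrative only.
n = 1000
W = pd.DataFrame({
    "academic_age": rng.integers(1, 40, n),
    "tenure":       rng.integers(0, 30, n),
    "prev_grants":  rng.poisson(1.5, n),
})
# applied = 1 if the scientist filed at least one grant proposal as PI in year t.
applied = (0.05 * W["prev_grants"] + 0.01 * W["tenure"]
           + rng.normal(size=n) > 0.3).astype(int)

# First stage: Probit model of the probability of applying.
Wc = sm.add_constant(W)
probit = sm.Probit(applied, Wc).fit(disp=False)

# Inverse Mills ratio (Heckman, 1979), to be included in the second-stage equation.
xb = Wc @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)
```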
Because peer evaluation may be affected by unobserved factors that could also influence industry evaluation, and is also subject to auto-correlation (i.e., peer evaluation in year t is dependent on peer evaluation in t-1), we employ a Poisson model with continuous endogenous covariates using an iterative generalized method of moments (GMM) estimator (Wooldridge, 2008). GMM models are suitable for panel datasets with a large number of individuals (n) and a limited number of time periods (t) (Bascle, 2008). We prefer a Poisson model to negative binomial regression because the conditional variance exceeds the conditional mean for less than 20% of the scientists included in the analysis.
The selected method requires instruments—i.e., variables not correlated with the error term of the outcome equation but correlated with the predictors suspected of being endogenous. As instruments for the three endogenous variables (i.e., peer evaluation, industry evaluation, and industry evaluation squared), we use (i) the logarithmic transformation of the total amount of money awarded by the U.K. government’s science funding bodies up to the focal year (U.K. Government, 2013); (ii) the yearly percentage of U.K. gross domestic expenditure on R&D in the business sector (Eurostat, 2015); and (iii) a dummy variable switching from 0 to 1 in 2004, reflecting the introduction of a software system at Minerva that streamlined the administrative process of grant application and contracts administration. Instrument (i) is based on the consideration that the stock of historically available public research funding is related to a scientist’s previous grant acquisition, but not to grant acquisition in a focal year. Instrument (ii) reflects the fact that gross domestic expenditure on R&D in the environment of the university will affect academics’ chances of acquiring an industry contract (without affecting success in grant acquisition). Instrument (iii) is based on the consideration that the introduction of a software system created an exogenous shock in the university research funding management, affecting the stock of grants and industry contracts, but not the likelihood of obtaining a grant in the focal year. In the endogenous variable equation, we further include as covariates the two-year lagged values of the three endogenous variables, the team quality for all grants awarded up to t-2 (which we expect to be correlated with our endogenous variables at t-1 but not with the dependent variable (DV) in t), as well as the two-year lagged values of industry evaluation and industry evaluation squared, interacted with their boundary conditions (Abadie, 2003). As recommended (Bascle, 2008), we report Hansen’s J statistic, which is used to determine the validity of the overidentifying restrictions in GMM Poisson models. The J statistic is not significant (p > 0.05) in any model, suggesting that the models are correctly specified.
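To illustrate the kind of moment conditions such an estimator relies on, the sketch below implements a stripped-down two-step (rather than iterative) GMM estimator for an exponential-mean model with one endogenous regressor, excluded instruments, and the Hansen J statistic. It runs on simulated data; the variable names, data-generating process, and multiplicative-error moment conditions are assumptions for illustration, not the specification reported in our tables.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated data: y is a count outcome, x an endogenous regressor, z1/z2 instruments.
n = 2000
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)                              # unobserved confounder
x = 0.6 * z1 + 0.4 * z2 + 0.5 * u + rng.normal(size=n)
y = rng.poisson(np.exp(0.2 + 0.3 * x + 0.5 * u))    # x is endogenous via u

X = np.column_stack([np.ones(n), x])                # regressors (constant + endogenous x)
Z = np.column_stack([np.ones(n), z1, z2])           # instruments (overidentified: L=3 > K=2)

def moments(beta):
    # Multiplicative-error moment conditions: E[z * (y * exp(-x'b) - 1)] = 0.
    resid = y * np.exp(-X @ beta) - 1.0
    return Z * resid[:, None]                       # n x L matrix of moment contributions

def objective(beta, W):
    gbar = moments(beta).mean(axis=0)
    return n * gbar @ W @ gbar

# Step 1: weighting matrix based on the instruments only.
W1 = np.linalg.inv(Z.T @ Z / n)
step1 = minimize(objective, np.zeros(X.shape[1]), args=(W1,), method="BFGS")

# Step 2: efficient weighting matrix from the step-1 moment contributions.
G = moments(step1.x)
W2 = np.linalg.inv(G.T @ G / n)
step2 = minimize(objective, step1.x, args=(W2,), method="BFGS")

# Hansen's J statistic: asymptotically chi-squared with L - K degrees of freedom
# (here, 1) under valid overidentifying restrictions.
J = objective(step2.x, W2)
print("beta:", step2.x, "Hansen J:", J)
```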
Coarsened exact matching analysis.
Although the tests recommended by the literature suggest that our results are valid, the soundness of the panel dataset analysis of the full population of Minerva scientists is conditional on the relevance and strict exogeneity of the instruments used (Bascle, 2008). In order to further rule out alternative explanations, such as variations in peer evaluation resulting from scientists' unobserved abilities and interests rather than from changes in their industry evaluation, we resort to a matching procedure that compares the scientists with at least one industry contract during the observation period ("treated") against the most similar individuals without any industry contracts ("untreated").4
We employ coarsened exact matching, a nonparametric matching procedure (Blackwell, Iacus, King, & Porro, 2009). Compared to a propensity score matching approach (Rubin, 2001), the coarsened exact matching method ensures that the inequality between the treated group and the control group is not greater than an ex ante user choice (Iacus, King, & Porro, 2012). This suits our purpose as we aim to precisely control the matching process with respect to treated scientists' characteristics at a specific point in time—i.e., in the year before they sort into their first contract. Following Azoulay, Stuart, and Wang (2014), we begin by identifying a set of covariates to be kept in balance between the treated and the control groups: faculty of affiliation, year of observation, previous peer evaluation, patents, industry experience, tenure, and team quality. We choose these variables in order to identify individuals who are exposed to the same set of opportunities and incentives (faculty affiliation), at the same time (year), with similar academic track records in terms of grant acquisition (previous peer evaluation), preferences for commercialization (patents and industry experience), seniority in the organization (tenure), and academic quality of their collaborators (team quality). We match the first two variables (faculty affiliation and year) exactly. We coarsen the latter five variables, creating four equally spaced cutpoints including the extreme values; this means observations are divided into three groups for each of these variables. Each observation is then allocated to a specific stratum obtained from the intersection of the specified intervals for all seven variables. A treated observation is matched to an untreated one only if both observations—treated and untreated—are allocated to the same stratum (if not, the treated observation remains unmatched). The procedure results in the identification of 54 strata with at least two observations (treated and matched individuals). This allows us to match 410 individuals out of the 552 treated ones (74%), resulting in 3,091 person–year observations for a sample of 820 individuals. We then reestimate Poisson models using the same specifications employed for the full sample analysis.
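To make the stratification logic concrete, a minimal sketch follows (in Python, on synthetic data); it reproduces the coarsening-and-matching steps described above rather than the exact routines of Blackwell et al. (2009), and all variable names, distributions, and bin counts are placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000
# Synthetic person-level data for the year before the first contract (placeholders).
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "faculty": rng.choice(["engineering", "medicine", "science"], n),
    "year": rng.integers(1995, 2012, n),
    "prev_peer_eval": rng.poisson(5, n),
    "patents": rng.poisson(0.4, n),
    "industry_exp": rng.poisson(0.2, n),
    "tenure": rng.integers(2, 40, n),
    "team_quality": rng.normal(0, 4, n),
})

# Faculty and year are matched exactly; the remaining five covariates are coarsened
# into three equal-width intervals (i.e., four equally spaced cutpoints).
coarsen = ["prev_peer_eval", "patents", "industry_exp", "tenure", "team_quality"]
for col in coarsen:
    df[col + "_bin"] = pd.cut(df[col], bins=3, labels=False, include_lowest=True)

strata_vars = ["faculty", "year"] + [c + "_bin" for c in coarsen]

# A stratum is usable only if it contains at least one treated and one untreated case.
matched = df.groupby(strata_vars, dropna=False).filter(
    lambda g: g["treated"].nunique() == 2
)
print(f"{int(matched['treated'].sum())} treated observations retained in "
      f"{matched.groupby(strata_vars).ngroups} matched strata")
```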
Interpretation of the results via interviews.
As an additional step, and in line with an explanatory sequential design (Creswell, 2014; Kapoor & Klueter, 2015), we carried out interviews in order to elucidate and validate our postulated theoretical mechanisms. We conducted 10 interviews with scientists involved in reviewing grant proposals. The interviews lasted from 15 to 45 minutes. Informants are from a variety of disciplinary backgrounds—reflecting the distribution of disciplines at Minerva—and regularly review grant proposals for a variety of research funding organizations. We used a semi-structured interview guide covering key questions underpinning our postulated mechanisms.
RESULTS
Results of Full Panel Analysis
Table 1 shows summary statistics and pairwise correlations for the variables included in our main specification. Table 2 exhibits the full sample results for the iterative Poisson GMM estimator. The baseline models (i.e., Models 1 and 2 without inverse Mills ratio and Models 3 and 4 including the inverse Mills ratio) assess the effect of an individual’s industry evaluation on their evaluation by academic peers (peer evaluation). Models 5 to 7 test the moderating effects of identity proximity with industry, quality of publishing record, and irregularity of publishing record; Model 8 shows the fully specified model.
TABLE 1 Summary Statistics and Pairwise Correlations
Variable | Mean | SD | Min. | Max. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 Peer evaluation (no. of grants) | 0.90 | 1.37 | 0.00 | 23.00 | 1.00 | ||||||||||||||||
2 Inverse Mills ratio | 0.45 | 0.13 | 0.29 | 0.80 | −0.23 | 1.00 | |||||||||||||||
3 D: Grant proposal filed [1 & 2] | 0.56 | 0.50 | 0.00 | 1.00 | −0.27 | 0.38 | 1.00 | ||||||||||||||
4 D: Grant proposal filed [3 to 7] | 0.35 | 0.48 | 0.00 | 1.00 | 0.13 | −0.24 | −0.83 | 1.00 | |||||||||||||
5 D: Grant proposal filed [8 to 14] | 0.07 | 0.26 | 0.00 | 1.00 | 0.23 | −0.21 | −0.32 | −0.21 | 1.00 | ||||||||||||
6 D: Grant proposal filed [15 and above] | 0.02 | 0.12 | 0.00 | 1.00 | 0.13 | −0.12 | −0.14 | −0.09 | −0.04 | 1.00 | |||||||||||
7 Academic age | 16.55 | 8.88 | 0.00 | 57.00 | 0.06 | −0.24 | −0.08 | 0.04 | 0.07 | 0.05 | 1.00 | ||||||||||
8 Tenure | 12.40 | 8.31 | 2.00 | 50.00 | 0.01 | −0.04 | −0.01 | −0.01 | 0.03 | 0.02 | 0.61 | 1.00 | |||||||||
9 Patents | 0.38 | 1.37 | 0.00 | 22.00 | 0.19 | −0.16 | −0.10 | 0.03 | 0.11 | 0.05 | 0.14 | 0.16 | 1.00 | ||||||||
10 Industry experience | 0.23 | 1.54 | 0.00 | 35.00 | 0.03 | −0.10 | −0.03 | 0.03 | 0.02 | 0.00 | 0.05 | 0.04 | 0.17 | 1.00 | |||||||
11 Team quality | −2.06 | 4.73 | −6.15 | 141.7 | 0.28 | −0.27 | −0.29 | 0.11 | 0.24 | 0.24 | 0.13 | 0.08 | 0.28 | 0.07 | 1.00 | ||||||
12 Previous peer evaluation | 4.90 | 5.70 | 0.00 | 65.00 | 0.37 | −0.46 | −0.39 | 0.14 | 0.34 | 0.29 | 0.31 | 0.23 | 0.29 | 0.10 | 0.58 | 1.00 | |||||
13 Identity proximity with industry | 0.73 | 0.45 | 0.00 | 1.00 | 0.08 | −0.14 | −0.13 | 0.07 | 0.10 | 0.06 | −0.07 | 0.02 | 0.00 | 0.02 | 0.08 | 0.06 | 1.00 | ||||
14 Quality of publishing record | 4.12 | 1.52 | 0.00 | 8.02 | 0.26 | −0.60 | −0.29 | 0.15 | 0.20 | 0.14 | 0.32 | 0.20 | 0.18 | 0.14 | 0.35 | 0.48 | 0.03 | 1.00 | |||
15 Irregularity of publishing record | 0.09 | 0.18 | 0.00 | 1.00 | −0.15 | 0.48 | 0.15 | −0.11 | −0.08 | −0.04 | −0.40 | −0.25 | −0.10 | −0.05 | −0.15 | −0.25 | −0.01 | −0.51 | 1.00 | ||
16 Industry evaluation | 1.17 | 2.75 | 0.00 | 37.00 | 0.15 | −0.19 | −0.19 | 0.09 | 0.13 | 0.15 | 0.19 | 0.16 | 0.14 | 0.12 | 0.20 | 0.29 | 0.14 | 0.21 | −0.12 | 1.00 | |
17 Industry evaluation × industry evaluation | 8.94 | 54.21 | 0.00 | 1,369 | 0.07 | −0.09 | −0.12 | 0.06 | 0.07 | 0.10 | 0.11 | 0.07 | 0.05 | 0.04 | 0.14 | 0.16 | 0.08 | 0.13 | −0.05 | 0.85 | 1.00 |
TABLE 2 Results of the Iterative GMM Poisson Estimation (Full Sample)
DV = Peer evaluation | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 | Model 7 | Model 8 |
---|---|---|---|---|---|---|---|---|
Inverse Mills ratio | −0.204 | −0.185 | −0.178 | −0.072 | −0.173 | −0.092 | ||
(.234) | (.233) | (.233) | (.232) | (.232) | (.231) | |||
D: Grant proposal filed [1 & 2] | −0.469*** | −0.466*** | −0.460*** | −0.458*** | −0.458*** | −0.442*** | −0.457*** | −0.439*** |
(.047) | (.047) | (.048) | (.048) | (.048) | (.048) | (.048) | (.047) | |
D: Grant proposal filed [8 to 14] | 0.262*** | 0.260*** | 0.261*** | 0.258*** | 0.258*** | 0.266*** | 0.261*** | 0.266*** |
(.055) | (.055) | (.055) | (.055) | (.055) | (.055) | (.055) | (.055) | |
D: Grant proposal filed [15 and above] | 0.133 | 0.129 | 0.14 | 0.136 | 0.134 | 0.165 | 0.127 | 0.173 |
(.144) | (.145) | (.144) | (.145) | (.146) | (.142) | (.147) | (.141) | |
Academic age | −0.005† | −0.005† | −0.005† | −0.006† | −0.006† | −0.006* | −0.006† | −0.006* |
(.003) | (.003) | (.003) | (.003) | (.003) | (.003) | (.003) | (.003) | |
Tenure | −0.014*** | −0.015*** | −0.014*** | −0.014*** | −0.015*** | −0.015*** | −0.014*** | −0.016*** |
(.003) | (.003) | (.004) | (.004) | (.004) | (.003) | (.003) | (.003) | |
Irregularity of publishing record | −1.105*** | −1.099*** | −1.081*** | −1.076*** | −1.068*** | −0.992*** | −1.141*** | −1.023*** |
(.227) | (.226) | (.231) | (.231) | (.230) | (.232) | (.237) | (.244) | |
Quality of publishing record | 0.117*** | 0.115*** | 0.108*** | 0.107*** | 0.106*** | 0.144*** | 0.109*** | 0.144*** |
(.026) | (.026) | (.028) | (.028) | (.027) | (.028) | (.028) | (.028) | |
Patents | 0.052*** | 0.051*** | 0.052*** | 0.050*** | 0.049*** | 0.053*** | 0.051*** | 0.052*** |
(.014) | (.015) | (.014) | (.015) | (.015) | (.015) | (.015) | (.015) | |
Industry experience | −0.011 | −0.014 | −0.012 | −0.014 | −0.011 | −0.01 | −0.015 | −0.013 |
(.014) | (.014) | (.014) | (.014) | (.015) | (.013) | (.014) | (.021) | |
Identity proximity with industry | 0.224* | 0.231* | 0.220* | 0.228* | 0.255* | 0.231* | 0.226* | 0.223* |
(.097) | (.097) | (.097) | (.097) | (.110) | (.098) | (.096) | (.110) | |
Team quality | −0.001 | −0.001 | −0.001 | −0.001 | −0.001 | −0.001 | −0.001 | −0.001 |
(.003) | (.003) | (.003) | (.003) | (.003) | (.004) | (.003) | (.003) | |
Previous peer evaluation | 0.025*** | 0.024*** | 0.025*** | 0.024*** | 0.024*** | 0.026*** | 0.024*** | 0.025*** |
(.005) | (.005) | (.005) | (.005) | (.005) | (.005) | (.005) | (.005) | |
Industry evaluation | 0.007 | 0.034* | 0.007 | 0.034* | 0.163* | 0.260*** | 0.030* | 0.340*** |
(.005) | (.014) | (.005) | (.014) | (.072) | (.063) | (.014) | (.094) | |
Industry evaluation × industry evaluation | −0.001* | −0.001* | −0.029** | −0.011** | −0.001† | −0.037** | ||
(.001) | (.001) | (.011) | (.004) | (.001) | (.012) | |||
Industry evaluation × identity proximity with industry | −0.128† | −0.108 | ||||||
(.072) | (.073) | |||||||
Industry evaluation × industry evaluation × identity proximity with industry | 0.027* | 0.028* | ||||||
(.011) | (.012) | |||||||
Industry evaluation × quality of publishing record | −0.043*** | −0.037** | ||||||
(.011) | (.011) | |||||||
Industry evaluation × industry evaluation × quality of publishing record | 0.002** | 0.002*** | ||||||
(.001) | (.000) | |||||||
Industry evaluation × irregularity of publishing record | 0.443† | 0.405* | ||||||
(.251) | (.192) | |||||||
Industry evaluation × industry evaluation × irregularity of publishing record | −0.047** | −0.041** | ||||||
(.018) | (.013) | |||||||
Department fixed effects | Included | Included | Included | Included | Included | Included | Included | Included |
Constant | −0.450** | −0.456** | −0.326† | −0.344† | −0.368† | −0.583** | −0.362† | −0.569** |
(.139) | (.139) | (.197) | (.196) | (.20) | (.204) | (.196) | (.206) | |
Number of observations | 5,131 | 5,131 | 5,131 | 5,131 | 5,131 | 5,131 | 5,131 | 5,131 |
Number of individuals | 1,571 | 1,571 | 1,571 | 1,571 | 1,571 | 1,571 | 1,571 | 1,571 |
Endogenous variables | 2 | 3 | 2 | 3 | 5 | 5 | 5 | 9 |
Instruments (standard) | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
Instruments (lagged variables) | 3 | 4 | 3 | 4 | 6 | 6 | 6 | 12 |
Hansen’s J chi-square | 3.93 | 4.13 | 3.8 | 4.01 | 3.73 | 4.24 | 4.09 | 6.79 |
Hansen’s J df | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 6 |
Hansen’s J p | 0.41 | 0.38 | 0.43 | 0.4 | 0.44 | 0.37 | 0.39 | 0.34 |
In line with Hypothesis 1, Model 4 shows a positive and significant effect of industry evaluation (β = 0.034, p < 0.05), as well as a negative and significant effect of its squared term on peer evaluation (β = –0.001, p < 0.05). To facilitate interpretation of the results, we plot peer evaluation across the range of observed values for an individual's industry evaluation (number of industry contracts acquired) in Figure 2. As suggested by Lind and Mehlum (2010), we validate the presence of an inverted U shape within an interval of values by testing whether the relationship is increasing at the low end of this interval and decreasing at its high end. In our case, the slope at the lower bound (0) is 0.039 (p < 0.01) and at the upper bound (37) is −0.077 (p < 0.05), resulting in a significant overall test for the presence of an inverted U shape (t-value = 2.30; p < 0.05). Furthermore, the estimated turning point is within the range of the variable (12.48, 95% Fieller interval [8.31; 23.62]). The hypothesized inverted U shape between industry evaluation and peer evaluation is thus supported.

FIGURE 2 Industry Evaluation and Peer Evaluation
Notes: Figure is based on Model 4. Marginal effects estimated by keeping the other covariates at their means.
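The logic of the Lind and Mehlum (2010) test referenced above can be stated compactly (a generic sketch; the slopes reported above are computed from the unrounded estimates on the response scale). With an exponential conditional mean,

$$E[y \mid x] = \exp\!\big(\beta_1 x + \beta_2 x^2 + \mathbf{w}'\boldsymbol{\gamma}\big), \qquad \frac{\partial E[y \mid x]}{\partial x} = (\beta_1 + 2\beta_2 x)\,E[y \mid x],$$

so the sign of the slope is governed by $\beta_1 + 2\beta_2 x$. The test requires a significantly positive slope at the lower bound of the data range, a significantly negative slope at the upper bound, and a turning point

$$x^{*} = -\frac{\beta_1}{2\beta_2}$$

inside the data range, with its confidence interval obtained by the Fieller method (the value of 12.48 reported above corresponds to the unrounded Model 4 coefficients).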
We use Models 5 to 8 to test Hypotheses 2a, 2b, 3a, and 3b. Hypothesis 2a predicts that the inverted U-shaped relationship between industry evaluation and peer evaluation of academic scientists is attenuated in academic disciplines with high identity proximity with industry, and accentuated in disciplines with low identity proximity with industry. This is consistent with Model 5: the interaction term between industry evaluation and identity proximity with industry is negative and (marginally) significant (β = –0.128, p < 0.1), whereas the interaction term between the squared term of industry evaluation and identity proximity with industry is positive and significant (β = 0.027, p < 0.05). To facilitate interpretation, in Figure 3 we plot the relationship between industry evaluation and peer evaluation at a high (= 1) and a low (= 0) level of identity proximity with industry, respectively. The analysis suggests that, for industry evaluation lower than three contracts and higher than five, the impact on peer evaluation significantly differs between low and high identity proximity with industry. Furthermore, consistent with Hypothesis 2b, the turning point moves to the left for the disciplines less proximate to industry. These results suggest a significant effect, on average, of identity proximity on the relationship between peer evaluation and industry evaluation. In industry-proximate disciplines, the estimated number of acquired grants increases with the number of acquired contracts up to about 13 industry contracts, where it peaks at approximately one grant, and recedes beyond that point. By contrast, in distant disciplines, the effect of industry contracts turns negative after only about three contracts, at which point the estimated number of acquired grants peaks at 0.76. For scientists with the average amount of industry contracts (1.17), the estimated average number of acquired grants varies from 0.81 grants in industry-proximate disciplines to 0.70 grants in industry-distant disciplines.

FIGURE 3 Industry Evaluation and Peer Evaluation Moderated by Identity Proximity with Industry
Notes: Figure is based on Model 5. Marginal effects estimated by keeping the other covariates at their means.
Hypothesis 3a predicts that the inverted U-shaped relationship between industry evaluation and peer evaluation of academic scientists is attenuated when the quality of the publishing record is high, and accentuated when quality is low. In Model 6, the interaction term between industry evaluation and quality of publishing record is negative and significant (β = –0.043, p < 0.001), whereas the interaction term between the squared term of industry evaluation and the quality of publishing record is positive and significant (β = 0.002, p < 0.01). To facilitate interpretation, we plot in Figure 4 the relationship between industry evaluation and peer evaluation at a higher (one standard deviation above the mean) and a lower (one standard deviation below the mean) level of quality of publishing record, respectively. The analysis suggests that the impact on peer evaluation significantly differs between low and high quality of publishing record at industry evaluation lower than four contracts and higher than 23 contracts. For scientists with a higher-quality record, the estimated number of awarded grants increases with the number of industry contracts up to about 11 contracts, where it peaks at approximately one grant, and recedes beyond that point. For scientists with a lower-quality record, the effect of contracts on the number of grants is comparatively lower at low levels of contracts, reaches a higher maximum (1.1 grants at a level of 10 contracts), and falls faster toward 0 at the right end of the contract distribution. These patterns are confirmed in the fully specified Model 8.

FIGURE 4 Industry Evaluation and Peer Evaluation Moderated by the Quality of the Publishing Record
Notes: Figure is based on Model 6. Marginal effects estimated by keeping the other covariates at their means.
Hypothesis 3b predicts that the inverted U-shaped relationship between industry evaluation and peer evaluation is attenuated when the irregularity of an individual's publishing record is low and accentuated when irregularity is high. As per Model 7, the interaction term between industry evaluation and irregularity of publishing record is positive and (marginally) significant (β = 0.443, p < 0.1). Furthermore, the interaction term between industry evaluation squared and irregularity of publishing record is negative and statistically significant (β = –0.047, p < 0.01), pointing to a strengthening of the inverted U shape as irregularity grows. In Figure 5, we plot the relationship between industry evaluation and peer evaluation at a higher (one standard deviation above the mean) and a lower (one standard deviation below the mean) level of irregularity of publishing record. The analysis suggests that, for low industry evaluation (i.e., fewer than three contracts) and for high industry evaluation (i.e., more than 10 contracts), the impact on peer evaluation significantly differs between low and high irregularity. For scientists with a regular publishing record, the estimated number of grants increases with the number of contracts up to about 17 industry contracts, where it peaks at approximately 1.1 grants, and recedes beyond that point. For scientists publishing less regularly, the effect of contracts on grants is comparatively lower, reaches a maximum of 0.85 grants at a level of five contracts, and falls faster toward 0 at the right end of the contract distribution. This confirms that, as with publication quality, the curve is flatter for scientists with a regular publishing record. Contrary to quality, though, we observe a shift of the turning point to the left as irregularity increases, suggesting that academics who publish less frequently may experience a negative net peer valuation effect at lower levels of industry contracts. The full Model 8 confirms these results.

FIGURE 5 Industry Evaluation and Peer Evaluation Moderated by the Irregularity of the Publishing Record
Notes: Figure is based on Model 7. Marginal effects estimated by keeping the other covariates at their means.
Given the nonlinear nature of our count models, we also compute the linear and squared terms of the secondary effects to evaluate the true value and the significance of the moderating effects across the whole range of the variable distribution (Bowen, 2012; Gruber, MacMillan, & Thompson, 2013). Results confirm that the secondary effects of identity proximity with industry, quality of publishing record, and irregularity of publishing record are in the expected direction and significant (p < .01) in at least 99% of cases, with the exception of the linear effects of identity proximity (p < 0.1) and irregularity (n.s.). Overall, these results add support to Hypotheses 2a, 2b, 3a, and 3b.
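The need for this additional computation can be seen from a generic exponential-mean specification (notation simplified; the full model includes all covariates and all moderated terms). Writing $\mu = E[y \mid x, z] = \exp(\eta)$ with $\eta = \beta_1 x + \beta_2 x^2 + \beta_3 xz + \beta_4 x^2 z + \gamma z + \cdots$, the moderating ("secondary") effect of $z$ on the slope of $x$ is the cross-partial derivative

$$\frac{\partial^2 \mu}{\partial x\,\partial z} = \Big[(\beta_3 + 2\beta_4 x) + (\beta_1 + 2\beta_2 x + \beta_3 z + 2\beta_4 xz)(\beta_3 x + \beta_4 x^2 + \gamma)\Big]\mu,$$

which depends on the values of $x$ and $z$ (and on all other covariates through $\mu$), not only on the interaction coefficients; its sign and significance must therefore be evaluated across the observed distribution, which is what the percentages above refer to.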
We conduct further tests to check that our results are robust to changes in specifications. First, we confirm the validity of our GMM Poisson estimation using the procedure recommended by Blattberg, Kim, and Neslin (2008) (see Figure A1). Second, to address standard error inefficiencies related to the inclusion of the inverse Mills ratio in the second-stage outcome equation, we rerun Models 3 to 8, bootstrapping the standard errors 1,000 times and including robust standard errors. Third, we use a negative binomial estimator in lieu of a Poisson model to account for overdispersion (about 20% of the cases in our data). Fourth, we run a subsample analysis leveraging data from a survey administered to all scientists employed at Minerva in 2013 (including 1,352 individuals in our dataset). We use fine-grained data on individual commercial preferences and industry experience to specify three alternative selection models (predicting scientists' probability of submitting at least one grant proposal). Fifth, we test alternative specifications of industry and peer evaluations: (i) we consider all industry contracts and research grants independently of the scientist's role (principal investigator or co-investigator); (ii) we exclude funders with fewer than five years of operations, in case candidates do not disclose contracts from less well-known (young) firms; (iii) we weight the contracts by the cumulative R&D expenses of funding companies (assessed up to the year in which the contract was awarded, in million currency units), because candidates may preferentially disclose contracts from more prestigious, R&D-intensive companies; and (iv) we rerun the models after dropping business school academics, who may have specific attributes, from the sample. Sixth, because journal impact factors are not available for the years before 1997, we compute an alternative measure for irregularity of publishing record, with a denominator bounded to a maximum of 15 years. Seventh, we reestimate the main models using a modified Gram–Schmidt procedure to orthogonalize the moderators and address potential correlation between the three variables (Golub & Van Loan, 1996). Finally, we check whether our results might be affected by outliers by running different censoring analyses: (i) excluding values of industry evaluation above different thresholds, and (ii) winsorizing our dataset, specifying lower and upper cut-off points for industry evaluation (Aguinis, Gottfredson, & Joo, 2013). The results of these tests (reported in Appendix A where indicated, or available upon request) remain consistent with our hypothesized relationships.
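Two of these checks can be illustrated in a few lines; the fragment below is a sketch in Python with placeholder thresholds and stand-in data, not the exact cut-offs or variables used in the reported analyses.

```python
import numpy as np
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(2)

# (i) Winsorizing: values beyond the chosen percentiles are replaced by the cut-off
# values (here no lower trimming and a 1% upper limit, purely for illustration).
industry_eval = rng.poisson(1.2, 5000).astype(float)
industry_eval_w = winsorize(industry_eval, limits=(0.0, 0.01))

# (ii) Modified Gram-Schmidt: residualize each (centered) moderator on the
# previously orthogonalized columns, so the resulting columns are uncorrelated.
moderators = rng.normal(size=(5000, 3))      # stand-ins for the three moderators
Q = moderators - moderators.mean(axis=0)     # center so orthogonality implies zero correlation
for j in range(1, Q.shape[1]):
    for k in range(j):
        q_k = Q[:, k]
        Q[:, j] -= (Q[:, j] @ q_k) / (q_k @ q_k) * q_k
print(np.corrcoef(Q, rowvar=False).round(3))  # off-diagonal entries are ~0
```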
Furthermore, we explore whether peer evaluators consider candidates’ industry evaluation relative to their academic evaluation (i.e., as a proportion of combined past industry contracts and academic grants rather than as an absolute value). Rather than a deviation from the identity of an academic scientist, a relative measure may capture the time and effort candidates devote to industry-oriented research at the expense of their core professional activity, suggesting a different mechanism from identity nonconformance. To test this possibility, we compute further iterations of our models with an alternative specification of the explanatory variable industry evaluation, operationalized as a proportion of industry contracts relative to the total number of awards. We find the level of significance of our explanatory variables to be lower or absent with this alternative specification. We interpret this finding as discounting the time and effort argument and further supporting the identity nonconformance mechanism: contrary to other cases documented in the literature (e.g., products, as in Negro, Hannan, & Rao [2011]), scientists have a default identity (an academic one), and peer evaluators interpret each additional unit of industry evaluation as a deviation from that identity.
Finally, in further analyses, we replicate our findings with a linear GMM estimation developed by Arellano and Bond (1991), using the logarithmic transformation of the monetary amount of research grants awarded to each scientist in a given year as a dependent variable.5 Compared to our main specification, this procedure allows us to include individual fixed effects and thus control for (invariant) unobserved heterogeneity among scientists (see Table A2). The correlation between this continuous measure of peer evaluation and the count measure used in our main models is just above 0.7. We employ a system GMM estimator (Arellano & Bover, 1995; Blundell & Bond, 1998), rather than a difference GMM estimator (Arellano & Bond, 1991), as the former allows for more instruments, thus improving efficiency. We treat industry evaluation, its squared term, previous peer evaluation, and the interaction terms as endogenous. All the remaining variables, except year controls and departmental affiliation, are considered as predetermined. Robust standard errors, clustered by individual, are included in all specifications. We assess the validity of the GMM estimator and the three selected standard instruments via first- and second-order Arellano-Bond autocorrelation tests, and the Hansen test of overidentification restrictions to verify the exogeneity of each instrument considered in isolation. The results of the linear models are similar to the main specification, adding support to our hypotheses.
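In generic form, this supplementary specification corresponds to a dynamic panel equation of the type below (shown only to fix ideas; the covariate vector and the interaction terms are abbreviated):

$$\ln(\text{GrantValue})_{it} = \beta_1\,\text{IndEval}_{it} + \beta_2\,\text{IndEval}_{it}^{2} + \delta\,\text{PrevPeerEval}_{it} + \mathbf{x}_{it}'\boldsymbol{\gamma} + \eta_i + \varepsilon_{it},$$

where $\eta_i$ is the individual fixed effect. Difference GMM removes $\eta_i$ by first-differencing and instruments the differenced endogenous regressors with their lagged levels; system GMM adds the level equations, instrumented with lagged differences, which is why it admits more instruments and typically improves efficiency. The Arellano–Bond tests then check for first- and second-order serial correlation in the differenced residuals (the former is expected by construction, the latter should be absent), while the Hansen test assesses the overidentifying restrictions.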
Results of Coarsened Exact Matching Analysis
Table 3 shows regression results for the matched sample analysis (see Table A3 for the descriptive statistics). We find a statistically significant inverted U-shaped effect of industry evaluation on peer evaluation in the baseline model (Model 12). We also find confirmation for the moderating effects of identity proximity, quality and irregularity of publishing records (Models 13 to 16). For robustness, we create a number of additional matched pair samples, trading the number of matched individuals against the coarseness of our matching intervals. Regressions conducted on these samples are in line with the above results. In all, the coarsened exact matching analysis confirms the results of the full sample panel dataset analysis, adding confidence to our findings.
TABLE 3 Results of the Iterative GMM Poisson Estimation (Coarsened Exact Matched Sample)
DV = Peer evaluation | Model 9 | Model 10 | Model 11 | Model 12 | Model 13 | Model 14 | Model 15 | Model 16 |
---|---|---|---|---|---|---|---|---|
Inverse Mills ratio | 0.065 | 0.084 | 0.082 | 0.173 | 0.107 | 0.179 | ||
(.281) | (.282) | (.281) | (.276) | (.277) | (.274) | |||
D: Grant proposal filed [1 & 2] | −0.404*** | −0.400*** | −0.406*** | −0.403*** | −0.403*** | −0.388*** | −0.400*** | −0.392*** |
(.058) | (.058) | (.059) | (.059) | (.060) | (.058) | (.059) | (.059) | |
D: Grant proposal filed [8 to 14] | 0.180** | 0.172* | 0.181** | 0.173** | 0.169* | 0.197** | 0.176** | 0.190** |
(.067) | (.067) | (.067) | (.067) | (.067) | (.066) | (.067) | (.066) | |
D: Grant proposal filed [15 and above] | 0.186 | 0.185 | 0.184 | 0.182 | 0.179 | 0.204 | 0.178 | 0.17 |
(.174) | (.174) | (.175) | (.175) | (.176) | (.175) | (.176) | (.178) | |
Academic age | −0.002 | −0.002 | −0.002 | −0.002 | −0.002 | −0.003 | −0.003 | −0.003 |
(.004) | (.004) | (.004) | (.004) | (.004) | (.004) | (.004) | (.004) | |
Tenure | −0.016*** | −0.016*** | −0.016*** | −0.016*** | −0.017*** | −0.017*** | −0.015** | −0.017*** |
(.005) | (.005) | (.005) | (.005) | (.005) | (.005) | (.005) | (.005) | |
Irregularity of publishing record | −0.718* | −0.721* | −0.722* | −0.726* | −0.708* | −0.643* | −0.991** | −0.782* |
(.286) | (.287) | (.291) | (.292) | (.291) | (.289) | (.333) | (.331) | |
Quality of publishing record (ln) | 0.107** | 0.104** | 0.110** | 0.107** | 0.109** | 0.161*** | 0.109** | 0.157*** |
(.036) | (.036) | (.038) | (.038) | (.038) | (.038) | (.038) | (.039) | |
Patents | 0.041 | 0.035 | 0.04 | 0.035 | 0.037 | 0.039 | 0.035 | 0.042† |
(.026) | (.025) | (.026) | (.025) | (.026) | (.025) | (.025) | (.025) | |
Industry experience | −0.076** | −0.082** | −0.075** | −0.082** | −0.082** | −0.077** | −0.088** | −0.082** |
(.028) | (.029) | (.029) | (.029) | (.029) | (.028) | (.028) | (.028) | |
Identity proximity with industry | 0.245† | 0.261* | 0.245† | 0.262* | 0.361* | 0.269* | 0.275* | 0.339* |
(.132) | (.132) | (.132) | (.132) | (.163) | (.133) | (.129) | (.162) | |
Team quality | 0.004 | 0.005 | 0.004 | 0.004 | 0.004 | 0.004 | 0.005 | 0.003 |
(.004) | (.003) | (.003) | (.003) | (.003) | (.003) | (.003) | (.003) | |
Previous peer evaluation | 0.028*** | 0.026*** | 0.029*** | 0.027*** | 0.027*** | 0.027*** | 0.026*** | 0.027*** |
(.007) | (.007) | (.007) | (.008) | (.008) | (.007) | (.008) | (.007) | |
Industry evaluation | 0.005 | 0.040* | 0.005 | 0.040* | 0.222* | 0.293*** | 0.032* | 0.411** |
(.006) | (.016) | (.006) | (.016) | (.099) | (.084) | (.016) | (.127) | |
Industry evaluation × industry evaluation | −0.002* | −0.002* | −0.038** | −0.012** | −0.001† | −0.045** | ||
(.001) | (.001) | (.014) | (.004) | (.001) | (.014) | |||
Industry evaluation × identity proximity with industry | −0.181† | −0.155 | ||||||
(.099) | (.098) | |||||||
Industry evaluation × industry evaluation × identity proximity with industry | 0.036** | 0.035* | ||||||
(.014) | (.014) | |||||||
Industry evaluation × quality of publishing record | −0.049** | −0.042**
(.016) | (.015) | |||||||
Industry evaluation × industry evaluation × quality of publishing record | 0.002** | 0.002**
(.001) | (.001) | |||||||
Industry evaluation × irregularity of publishing record | 0.687** | 0.447*
(.235) | (.210) | |||||||
Industry evaluation × industry evaluation × irregularity of publishing record | −0.062** | −0.044**
(.019) | (.014) | |||||||
Department fixed effects | Included | Included | Included | Included | Included | Included | Included | Included |
Constant | −0.444* | −0.454* | −0.483† | −0.505* | −0.608* | −0.803** | −0.530* | −0.859** |
(.192) | (.192) | (.253) | (.251) | (.264) | (.256) | (.249) | (.267) | |
Number of observations | 3,091 | 3,091 | 3,091 | 3,091 | 3,091 | 3,091 | 3,091 | 3,091 |
Number of individuals | 820 | 820 | 820 | 820 | 820 | 820 | 820 | 820 |
Pairs | 410 | 410 | 410 | 410 | 410 | 410 | 410 | 410 |
Endogenous variables | 2 | 3 | 2 | 3 | 5 | 5 | 5 | 9 |
Instruments (standard) | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
Instruments (lagged variables) | 3 | 4 | 3 | 4 | 6 | 6 | 6 | 12 |
Hansen’s J chi-square | 0.81 | 0.73 | 0.82 | 0.75 | 0.68 | 0.82 | 0.79 | 1.31 |
Hansen’s J df | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 6 |
Hansen’s J p | 0.93 | 0.94 | 0.93 | 0.94 | 0.95 | 0.93 | 0.93 | 0.97 |
Interpretation of Results via Interviews
Finally, we interviewed peer reviewers in order to further elucidate and interpret our results. We first focus on the main curvilinear effect hypothesized in this study. Statements from our informants suggest that—without being specifically prompted—they considered both of our two postulated mechanisms as relevant when evaluating a grant applicant. On the one hand, they considered a candidate’s valuation by industry as signaling generic ability, or, in the words of one respondent, the ability “to execute.” A professor of immunology stated: “In principle it would be a positive if I see somebody who has standing in industry. (...) It would [give me] information on the capacity to deliver and take things to the end. It would signal the generic capability of the candidate.” A professor of infectious disease said: “Having been given contracts shows that they have vision and can implement a vision,” and a medicine professor stated: “I would look at it [being well regarded in industry] favorably because it shows the expertise and experience to see through this program [a grant project].”
Simultaneously, informants suggested that high levels of industry evaluation were likely to trigger negative evaluations. Importantly, the answers suggest that this negative effect was due to reasons other than imputed ability, highlighting instead identity conformance concerns, in line with our conjecture of two interacting mechanisms. A professor in molecular biology said: "If somebody had a very large amount of industry engagement, I would see that as extremely strange. It would make me wonder what this person is really about. I would see it as a negative thing, yes." A professor of astronomy stated: "If they have an excessive involvement with industry, then I would probably think they won't be able to execute because they have a different outlook." A professor of materials science said: "High industry recognition is not necessarily a bad thing; it means they can successfully deliver something. But if they are excessively exposed to industry, then the question becomes [whether] they are really into something else [i.e., not academic objectives]." These respondents all reiterated that this negative effect would not apply to a "normal" or "reasonable" amount of industry engagement but only when it became "excessive"; this provides support for our conjecture that lower and moderate levels of industry engagement have little or no negative effects regarding identity conformance. Overall, our interview evidence provides additional support for our hypothesis positing a curvilinear effect as a result of two interacting effects: a positive managerial ability effect and a negative identity conformance effect.
We also explored how industry proximity of the candidate’s field and their academic quality influenced respondents’ evaluation process. Regarding the first aspect, by design, no respondent was able to compare candidates across the whole spectrum of industry proximity and could only speak for their discipline. For instance, the pure mathematician we spoke to was unable to comment on evaluating candidates who were close to industry. As an imperfect remedy, therefore, we sought to address this question in a way that allowed for within-person variation with respect to the industry proximity of evaluated candidates. We did this by asking informants whether different policies by grant bodies with respect to the industry orientation of grant proposals mattered to their evaluation of previous industry recognition. A medicine professor said: “Obviously I would look at previous industry recognition more favorably if the grant is for a large translational research program where you ultimately have to sell a product.” A professor of genetics echoed this view: “It depends on the funding that they are going for. If it’s a translational project it [previous industry evaluation] might be helpful.” The respondents also implied that for more applied projects applicants were allowed to exhibit high levels of previous industry recognition without incurring an identity penalty. We can infer from respondents’ situated accounts that candidates for more industry-proximate grants gained more in terms of their imputed ability from previous industry recognition and were simultaneously penalized less for the concurrent identity deviation. Overall, these insights provide qualitative support for the conjecture that the curvilinear main effect will be attenuated in situations of higher industry proximity, and also that a higher degree of industry evaluation would be perceived as tolerable, compared to lower industry proximity.
We also explored how scientists’ standing within academia influences peers’ evaluation. With respect to the ability component of industry evaluations, informants noted that having an industry track record mattered less for academics with impressive academic credentials, with the latter overriding the former. An aerodynamics professor stated: “If somebody has the right publications, previous grants and so on, then it wouldn’t matter how much they engage with industry.” We were also able to garner statements on the identity component of industry valuations. The molecular biology professor quoted above suggested that whether high recognition from industry would influence her judgment was conditional upon the academic track record of a grant candidate: “I would obviously look at all the rest [academic credentials], but if I don’t see all the rest then if I see this extreme engagement with industry then I won’t be pleased really.” A statistics professor mentioned: “If they have too much industry involvement, well then they need to show something that tells me that they have the academic side covered. I need some evidence to see whether they are really up to it academically speaking.” These quotes illustrate that, first, respondents relied less on industry engagement as an indicator of ability for candidates with higher academic quality, and, second, high evaluations from industry were less critically regarded when reviewers had evidence suggesting good academic track records for the candidates. Overall, this suggests an attenuation of the curvilinear effect for academically more successful candidates, as reflected in our findings.
Overall, quantitative and qualitative empirical findings support the view that scientists' evaluation by the peer audience has an inverted U-shaped relationship with their evaluation by the external (industry) audience. Further, we find evidence that the relationship is attenuated by scientists' social proximity to industry, at the discipline level, and by their academic quality. These results are robust with respect to selection, endogeneity, and changes in specification, and are confirmed by qualitative insights.
DISCUSSION
The goal of this study was to explore the social valuation of organizational members who are subject to multiple audiences. Specifically, we investigate how a peer audience values a candidate when faced with information on the candidate's evaluation by an external, nonpeer audience. We argue that external audience evaluations represent an exogenous index to peer evaluators that contains two types of information: the candidate's ability, and the candidate's conformity with an expected identity. The interplay of both types of information generates an inverted U-shaped relationship between external audience evaluation and peer evaluation. For lower levels of external audience evaluation, each marginal increase benefits the candidate's evaluation by his or her peers because appreciation by the external audience is read as indicative of ability. Yet, beyond a certain point, marginal increases in external audience valuation start to have a negative effect on the perceived value of the candidate to the peer audience: high levels of external valuation are read as indicative of the candidate's failure to conform to the identity expected within the peer audience. We further find that this curvilinear effect is moderated by the identity proximity between peer audience and external audience, and by the availability of informative endogenous indices.
Our study contributes to several bodies of literature. First, we add to the understanding of social valuation processes that condition important organizational outcomes, such as the acquisition of research resources in our setting (Lamont, 2012; Shymko & Roulet, 2017). Prior work has demonstrated the fundamentally subjective nature of valuation, as socially endogenous inferences critically shape value judgments (Salganik & Watts, 2008). Yet, recent work has emphasized mechanisms that limit the gap between endogenously attributed value and objective conditions. In the stock market, for instance, value opportunism and value entrepreneurship mechanisms limit the deviation of stock price from the objective, quality-based value of listed shares (Zuckerman, 2012). A different mechanism was suggested by Card and DellaVigna (2017), who showed that, in an apparent attempt at affirmative action, reviewers of manuscripts submitted to economics journals favor those whose research is less cited.
In this paper, we highlight another mechanism limiting the self-fulfilling nature of socially endogenous inferences—that is, the heterogeneity of the audiences that a candidate faces. We find that prior (positive) value judgments do not always result in a positive, reinforcing influence on evaluation outcomes. This is the case when these evaluations represent exogenous indices; that is, when they are provided by an audience external to a candidate's core (peer) audience. The novel theoretical insight that we propose herein is that an evaluator's judgments carry information not only about the imputed ability of the candidates, but also about their fit with the prototypical identity template held by the evaluating audience. Extant work on valuation, including studies of reputation and status (George, Dahlander, Graffin, & Sim, 2016), has mostly regarded the indices attributed to a candidate as measures of ability, while the second dimension—an index attributed to a candidate also containing information about their identity—has been overlooked. An Academy Award, for instance, may increase the general perception that a film professional is valuable, but also signals their proximity to a specific audience identity (peers) and distance from other audiences (critics) (Cattani et al., 2014). The latter effect will have increasing impact on evaluation outcomes as the difference increases between peer audience and external audience with respect to the identity the peers expect from a legitimate candidate. Our focus on the identity dimension contrasts with previous work that has underlined cognitive similarity as a determining feature of proximity (Onal Vural, Dahlander, & George, 2013). Accounting for the identity dimension of audience valuation, and its tempering effect on socially endogenous inferences, brings more nuance to the understanding of social valuation, and underlines a limit to the self-reinforcing nature of socially endogenous inferences.
More broadly, our study complements prior work on the effects of actors' categorical boundary straddling on social valuation (Hsu, Hannan, & Koçak, 2009; Kovács & Johnson, 2014; Vergne, 2012). Our findings resonate with this work in the sense that crossing boundaries invites audience disapproval (as found in most of the literature), or approval when a category is stigmatized (e.g., Vergne, 2012). Our study complements this line of work by showing that the effect of straddling categories (in our case, academia and industry) need not be linear, but can be positive up to a certain degree. This is because here the evaluating audience does not merely consider how an actor (or product) is engaged or associated with different categories, but also accounts for the prior evaluations made by category-specific audiences. For instance, a country music audience may not devalue a country artist appearing from time to time in pop charts, which confirms the prowess of the artist, unless he or she does so repeatedly and becomes regarded as a pop artist, thus deviating from the expected identity of a country artist. Awareness that a candidate has been positively evaluated by an external audience thus provides a more specific signal about the candidate's ability (apart from identity) than the mere association with another category, and this in turn is responsible for the initially positive effect.
Second, we provide novel insights into the study of social approval assets, which are a subcategory of intangible assets that derive their value from favorable perceptions (Pfarrer, Pollock, & Rindova, 2010). Evaluation by an external audience is a social approval asset (Bundy & Pfarrer, 2015), because it is based on the aggregated favorable perceptions of audience members, as perceived by a peer audience member, or indeed by a member of any audience present. Like other social approval assets, including reputation, status, or legitimacy, previous audience evaluation—or appreciation—influences audiences’ perceptions and increases their willingness to engage in exchange with a candidate (George et al., 2016; Pfeffer & Salancik, 1978; Rindova, Pollock, & Hayward, 2006). Further, evaluation is an asset that is difficult and time-consuming to build, and cannot be acquired at will via market transactions (Barney, 1991). Once acquired, it becomes a self-reinforcing “social fact” (Pfarrer, Pollock, & Rindova, 2010), as it benefits actors that are already appreciated, and supports them in their continued quest to acquire resources and access economic opportunities.
Our study contributes to this literature by examining the properties of evaluations provided by multiple audiences. Authors have already pointed out that actors commonly rely on resource flows from more than one audience (Pontikes, 2012), that candidates are evaluated differently by different salient audiences (Cattani et al., 2014), and that candidates' success with specific audiences is driven by the reputation they have with them (Ertug et al., 2016). However, how evaluations by multiple audiences might interact has been less explored. Our findings reveal that appreciation from an external audience represents an "asset" only to a limited extent before becoming a "liability." Our main result, the curvilinear effect of external audience evaluation on the evaluation given by a peer audience, has implications for how a candidate, whether organization or individual, should approach a new audience. The situation may, for instance, apply to an online seller that primarily addresses consumers and considers approaching business customers as an alternative audience, without wanting to damage its prospects with its core audience. Our findings imply that there is an optimal degree of external audience appreciation, set at an intermediate level between being not at all valued by an external audience and being valued too highly. This optimal degree is highly actor specific, depending on how close the new audience is to the original audience and on how good the standing of the actor is with that latter audience; for highly proximate and highly peer-esteemed actors, the degree of external appreciation will also lose relevance as the curvilinear effect flattens. These insights have increasing relevance in a techno-economic context where more and more information about how candidates are valued is available to audience members carrying out valuations (Lanzolla & Frankort, 2016; Orlikowski & Scott, 2013).
Third, our study contributes to institutional theory, and specifically to the study of institutional logics (Thornton, Ocasio, & Lounsbury, 2012). It constitutes an application of the institutional logics perspective to the problem of valuation in candidate–audience relationships. The concept of institutional logics is apt for conceptualizing how collectives of social actors—audiences—decide what is valuable. Moreover, as previous research has suggested, different collectives of actors tend to adhere to different logics (Reay & Hinings, 2009)—that is, the principles that order the actors' reality and define what is seen as worthwhile and successful (Thornton, 2004). In this way, the value system underpinning an institutional logic is the basis for generating yardsticks by which to judge other actors. Our study gives insight into the process by which a collective of actors adhering to one logic conducts collective valuations of actors adhering to a different logic. In this respect, we propose that contradictions between different logics do influence the valuation of actors, but also that this is contingent upon the degree to which actors' behavior is perceptibly in line with the valuation framework implicit in each logic. For an actor, while moderately cohering to the valuation framework of an alternate logic is likely to be beneficial, an excessive conformity to performance expectations of another logic will damage their standing within their own constituency.
Finally, our study relates to a body of previous work on academic science, including the allocation of research funding in science (Freeman & Van Reenen, 2009), and particularly evaluation processes therein. Studying the awarding of National Institutes of Health grants, Li and Agha (2015) found that peer reviewers generally perform well in discerning the quality of proposed studies. Li (2017) observed that evaluators are better informed, but also more biased, about the quality of projects in their own area, with the benefits of expertise slightly outweighing the cost of bias. By contrast, Boudreau, Guinan, Lakhani, and Riedl (2016) found, albeit in a different context, that intellectual distance has a positive impact on an evaluator’s assessment of the quality of a proposal. While these diverging results on the influence of reviewer–applicant expertise overlap call for further research, our study addresses a different influence on funding decisions—that is, an applicant’s standing with an audience that peer reviewers regard as extraneous to their community.
The abovementioned previous work, along with other related studies, has relied on private information or (quasi) experimental setups to address endogeneity and establish causality. For instance, Jacob and Lefgren (2011) investigated the effects of grant acquisition on scientific production using a regression discontinuity design, and Azoulay et al. (2014) used information on status shocks to detect status effects on the citations of academics' work, in combination with a matched sample approach. Some of this work has deployed reviewer scores as a measure of valuation (Jacob & Lefgren, 2011; Li, 2017), or conducted experiments to randomize the assignment of evaluators and proposals (Boudreau et al., 2016). With all its limitations, our empirical approach—using panel data and matched-sample analyses—is applicable to a wide range of cases where reviewer scores are not formalized (e.g., industry contracts) or publicly disclosed (e.g., academic grants), and there is no opportunity for experimental or quasi-experimental design. In our setting, for instance, reviewer scores for industry contracts are generally unavailable, as the latter are awarded privately by corporations, using a variety of approaches. In addition, reviewer scores tend to be available only for single funders, while our measure (grant conferral to individuals) captures grant acquisition success across a wide range of science funders and allows us to include a far wider range of disciplines, and hence industry proximity, compared to other studies.
CONCLUSION
Boundary Conditions and Implications
We explore valuation in the specific setting of public science, which raises the question of generalizability. An important consideration concerns the orthogonality of the valuation criteria used by a peer audience and an external audience, respectively. Our framework allows for variation in identity overlap, though it is unlikely to apply in more extreme cases where the identities expected by audiences differ more dramatically. When the distance between the two audiences becomes too large, one will expect not only the indices of identity to diverge but also the indices of ability. In this case, a peer audience is likely to disregard information on external audience evaluations as indicators of ability, or even reach the opposite conclusion, as in the case of stigmatized external audiences. Such situations may, however, be rare, as (strategic) candidates can be expected to address audiences that see some value in them, while they will avoid addressing diametrically opposed audiences. A successful bank robber is unlikely to apply to become a banker. Often, there is a degree of interdependence between the valuation criteria deployed by two audiences. An evaluation provided by the external audience will then represent a reasonable index of ability to the peer audience, meaning that it would affect the peer audience's appreciation of a candidate in the way we argued in this paper. Overall, then, the presence of a moderate degree of overlap between two audiences' ability criteria constitutes a boundary condition for the predictive framework we devised—a condition that is likely to be met in most settings.
Furthermore, our study has remained agnostic in regard to the drivers of observable external audience valuations used as informational input by the peer audience, and it is likely that they are informed by intangible attributes including reputation and status. This poses the following questions: To what degree will these intangible attributes additionally or independently affect evaluation by the peer audience? How does the reputation and status of candidates specific to the peer audience affect the relationship between external audience valuation and peer audience valuation? Existing literature has suggested that high-status actors attract attention and higher scrutiny (Graffin, Bundy, Porac, Wade, & Quinn, 2013), which may increase the valuation penalty for appreciation by an external audience, yet afford some latitude to deviate from expectations (Phillips & Zuckerman, 2001). For example, Laurence Tribe, a Harvard Law School professor, was called a “sellout” and a traitor by his peers for representing a coal company (Wu, 2015), illustrating that high-status actors are not immune to the negative effect of involvement with an external audience. Future research may explore these issues further.
Implications for University–Industry Relations and Management Practice
Our study suggests novel insights for the theory and practice of university–industry relations and science commercialization. Considerable effort has been made to identify the effects of academic engagement with industry and involvement in commercialization on academics' scientific production (Perkmann et al., 2013). Many studies have suggested a positive relationship between engagement and individual scientific production (Blumenthal, Campbell, Causino, & Louis, 1996; Gulbrandsen & Smeby, 2005), but some have suggested an inverted U-shaped relationship (Banal-Estañol, Jofre-Bonet, & Lawson, 2015; Lin & Bozeman, 2006), or an inverse relationship (Goldfarb, 2008). In contrast to this work, our study focuses on the input side (research funding) rather than the output side (publications) of academic research. Our findings suggest that, for the majority of academic scientists, raising funding from an external audience—for example, industry—will result in positive valuation effects on the part of their peers. Only those bidding for industry funds—and by implication conducting application-oriented research—beyond a certain level will risk "devaluation" in the eyes of their peers and subsequently lower their chances of raising public research funds. Because it relates to how research is planned and anticipated, the trade-off at the funding stage takes chronological precedence over the possible trade-offs related to conducting application-oriented research as opposed to curiosity-driven research. Moreover, the mechanism that we postulate differs radically from the mechanism presumed in previous work to engender the possible disadvantages of engagement with industry (Goldfarb, 2008). Rather than stemming from the pressure to focus on applied research at the expense of curiosity-driven research, the effect we postulate relates to scientists' image and perception in their community: being seen to engage too much is sufficient to trigger penalties even before the research is started. In this sense, our mechanism may set individual researchers on a path toward less funding success with public grants (and more with industry contracts), with subsequent repercussions for their research output, as observed by previous research.
In practical terms, our study suggests that universities should encourage faculty engagement with external audiences—as, for most individuals, the effect is beneficial—though universities still need to take into account the close-knit structure of peer communities in science. Notably, our results suggest that junior researchers’ peer valuation may be particularly vulnerable to the effect we found because their endogenous indices have lower values than those of seniors (i.e., they lack an established publication record). Furthermore, university managers need to reflect on ways in which they can reduce the identity penalty for those who work frequently with industry, for instance by providing additional endogenous indices, in addition to publication records, that can alleviate peer audiences’ identity-related doubts. Beyond universities, our work has analogous implications for the management of organizations operating under the scrutiny of several audiences, or diversifying their activities to target new audiences. Our findings call for a cautious approach and highlight a critical trade-off. While, initially, engaging a new audience brings net benefits, at higher levels of engagement there will be a growing risk of undermining how an organization is evaluated by its traditional audience.
More generally, the study of social valuation in organizational contexts appears promising. Valuation processes are ubiquitous and part of the daily life of organizations; organizations constantly engage in valuation—for instance, in hiring decisions, individual and team performance assessments, and supplier screening. In addition, organizations are subject to external valuation by different audiences—for example, shareholders, clients, and regulators. By shedding light on the interdependencies between the social valuation carried out by different audiences, this study reveals critical mechanisms that affect the function and survival of organizations.
1 Following prior work (e.g., Zuckerman, 2012), we use the term valuation to denote the process through which an audience evaluates candidates and evaluation to refer to the outcome of a valuation process (George, Dahlander, Graffin, & Sim, 2016).
2 We thank an anonymous reviewer for suggesting this label.
3 A boundary condition, met in many cases, is that the two audiences do not rely on orthogonal ability criteria; see our discussion section.
4 The matching method offers an alternative to randomized trial experiments (e.g., Boudreau, Guinan, Lakhani, & Riedl, 2016), but is not applicable in our case due to the nature of the data on industry evaluation.
5 We choose not to log-transform our main dependent variable (i.e., number of grants awarded in time t) because several authors have advised against log-transforming count variables due to issues related to the treatment of zero observations (e.g., O’Hara & Kotze, 2010).
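To illustrate the point in footnote 5, the following minimal sketch (using toy data, not our study’s) shows why log-transforming a count outcome with zeros is problematic and how a Poisson specification handles zero counts directly.

```python
import numpy as np
import statsmodels.api as sm

# Toy count outcome with zeros (illustrative only, not the study's data)
grants = np.array([0, 0, 1, 2, 0, 3, 1, 0, 4, 2])
x = np.array([0.1, 0.3, 0.8, 1.2, 0.2, 1.9, 0.7, 0.0, 2.3, 1.4])

# A naive log transform yields -inf for zero counts, forcing ad hoc fixes
# such as log(y + 1) that distort the zeros (O'Hara & Kotze, 2010).
with np.errstate(divide="ignore"):
    print(np.log(grants))  # [-inf -inf 0. ...]

# A Poisson regression models the counts, zeros included, directly.
poisson_fit = sm.GLM(grants, sm.add_constant(x),
                     family=sm.families.Poisson()).fit()
print(poisson_fit.params)
```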
Acknowledgments
We are grateful to the following individuals for kindly commenting on previous drafts: Guilhem Bascle, Michaël Bikard, Paola Criscuolo, Simone Ferriani, Mark Kennedy, Yuri Mishina, Paul Nightingale, Sarah Otner, Thomas Roulet, Anne ter Wal, and Sara Valentini. We appreciate the research assistance of Maria Vittoria Amaduzzi, Antonella Bedini, Caterina Bissoni, Adele Gori, Andrea Maisto, and Cleo Silvestri. Special thanks are due to Okan Kibaroglu. We are grateful to Robert J. W. Tijssen for providing us with data on university–industry copublications collected by the Centre for Science and Technology Studies at Leiden University (www.cwts.nl). Previous versions were presented at the Academy of Management Meeting 2016 in Anaheim and the EGOS Colloquium 2016 in Naples. Funding was provided by the Engineering and Physical Sciences Research Council (EP/F036930/1), the Economic and Social Research Council (RES-331-27-0063), and the European Commission (FP7-PEOPLE-2009-IEF–252018). All three authors contributed equally to this work and are listed in alphabetical order.
REFERENCES
- 2003. Semiparametric instrumental variable estimation of treatment response models. Journal of Econometrics, 113: 231–263.
- 2013. Best-practice recommendations for defining, identifying, and handling outliers. Organizational Research Methods, 16: 270–301.
- 1991. Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Review of Economic Studies, 58: 277–297.
- 1995. Another look at the instrumental variable estimation of error-components models. Journal of Econometrics, 68: 29–51.
- 1956. Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs, 70: 1–70.
- 2009. The impact of academic patenting on the rate, quality, and direction of (public) research output. Journal of Industrial Economics, 57: 637–676.
- 2006. PublicationHarvester: An open-source software tool for science policy research. Research Policy, 35: 970–974.
- 2014. Matthew: Effect or fable? Management Science, 60: 92–109.
- 2015. The double-edged sword of industry collaboration: Evidence from engineering academics in the UK. Research Policy, 44: 1160–1175.
- 1992. A simple model of herd behavior. Quarterly Journal of Economics, 107: 797–817.
- 1991. Firm resources and sustained competitive advantage. Journal of Management, 17: 99–120.
- 2008. Controlling for endogeneity with instrumental variables in strategic management research. Strategic Organization, 6: 285–327.
- 2009. CEM: Coarsened exact matching in Stata. Stata Journal, 9: 524–546.
- 2008. Why database marketing? In R. Blattberg, B. D. Kim, & S. A. Neslin (Eds.), Database marketing: 13–46. New York, NY: Springer.
- 1996. Participation of life-science faculty in research relationships with industry. New England Journal of Medicine, 335: 1734–1739.
- 1998. Initial conditions and moment restrictions in dynamic panel data models. Journal of Econometrics, 87: 115–143.
- 2016. Looking across and looking beyond the knowledge frontier: Intellectual distance, novelty, and resource allocation in science. Management Science, 62: 2765–2783.
- 2012. Testing moderating hypotheses in limited dependent variable and other nonlinear models: Secondary versus total interactions. Journal of Management, 38: 860–889.
- 2007. Impacts of grants and contracts on academic researchers’ interactions with industry. Research Policy, 36: 694–707.
- 2015. A burden of responsibility: The role of social approval at the onset of a crisis. Academy of Management Review, 40: 345–369.
- 2017. What do editors maximize? Evidence from four leading economics journals. NBER Working Paper No. 23282.
- 2000. Why the microbrewery movement? Organizational dynamics of resource partitioning in the US brewing industry. American Journal of Sociology, 106: 715–762.
- 2014. Insiders, outsiders, and the struggle for consecration in cultural fields: A core-periphery perspective. American Sociological Review, 79: 258–281.
- 2016. Sample selection bias and Heckman models in strategic management research. Strategic Management Journal, 37: 2639–2657.
- 1994. Grants peer review in theory and practice. Evaluation Review, 18: 20–30.
- 2004. Social influence: Compliance and conformity. Annual Review of Psychology, 55: 591–621.
- 1998. Social influence: Social norms, conformity and compliance. In G. Lindzey, D. Gilbert, & S. T. Fiske (Eds.), The handbook of social psychology: 151–192. New York, NY: McGraw-Hill.
- 2002. Links and impacts: The influence of public research on industrial R&D. Management Science, 48: 1–23.
- 2014. Research design: Qualitative, quantitative, and mixed methods approaches. London, U.K.: Sage.
- 2017. Evaluating novelty: The role of panels in the selection of R&D projects. Academy of Management Journal, 60: 433–460.
- 1994. Toward a new economics of science. Research Policy, 23: 487–521.
- 2011. Why do academics engage with industry? The entrepreneurial university and individual motivations. Journal of Technology Transfer, 36: 316–339.
- 2012. Venturing into new territory: Career experiences of corporate venture capital managers and practice variation. Academy of Management Journal, 55: 563–583.
- 2013. A market mediation strategy: How social movements seek to change firms’ practices by promoting new principles of product valuation. Organization Studies, 34: 683–703.
- 2015. Logic combination and performance across occupational communities: The case of French film directors. Journal of Business Research, 69: 2371–2379.
- 2015. Who shall get more? How intangible assets and aspiration levels affect the valuation of resource providers. Strategic Organization, 13: 6–31.
- 2016. The art of representation: How audience-specific reputations affect success in the contemporary art field. Academy of Management Journal, 59: 113–134.
Eurostat. 2015. Eurostat UK data. Retrieved from https://ec.europa.eu/eurostat. Accessed June 27, 2015.
- 1990. What’s in a name? Reputation building and corporate strategy. Academy of Management Journal, 33: 233–258.
- 2009. What if Congress doubled R&D spending on the physical sciences? In J. Lerner & S. Stern (Eds.), Innovation policy and the economy: 1–38. Chicago, IL: University of Chicago Press.
- 2013. The institutional logics perspective: A new approach to culture, structure, and process. M@n@gement, 15: 583–595.
- 2016. Reputation and status: Expanding the role of social evaluations in management research. Academy of Management Journal, 59: 1–13.
- 2010. Forging an identity: An insider-outsider study of processes involved in the formation of organizational identity. Administrative Science Quarterly, 55: 1–46.
- 2015. Marks of distinction: Framing and audience appreciation in the context of investment advice. Administrative Science Quarterly, 60: 333–367.
- 2000. When cymbals become symbols: Conflict over organizational identity within a symphony orchestra. Organization Science, 11: 285–298.
- 2008. Beyond constraint: How institutions enable identities. In R. Greenwood, C. Oliver, K. Sahlin, & R. Suddaby (Eds.), The SAGE handbook of organizational institutionalism: 413–430. Los Angeles, CA: Sage.
- 2005. From the critics’ corner: Logic blending, discursive change and authenticity in a cultural production system. Journal of Management Studies, 42: 1031–1055.
- 2008. The effect of government contracting on academic research: Does the source of funding affect scientific output? Research Policy, 37: 41–58.
- 1996. Matrix computations. Baltimore, MD: Johns Hopkins University Press.
- 2013. Falls from grace and the hazards of high status: The 2009 British MP expense scandal and its impact on parliamentary elites. Administrative Science Quarterly, 58: 313–345.
- 2013. Escaping the prior knowledge corridor: What shapes the number and variety of market opportunities identified before market entry of technology start-ups? Organization Science, 24: 280–300.
- 2005. Industry funding and university professors’ research performance. Research Policy, 34: 932–950.
- 2016. Thinking about U: Theorizing and testing U- and inverted U-shaped relationships in strategy research. Strategic Management Journal, 37: 1177–1195.
- 1979. Sample selection bias as a specification error. Econometrica, 47: 153–161.
- 2009. Multiple category memberships in markets: An integrative theory and two empirical tests. American Sociological Review, 74: 150–169.
- 2012. Evaluative schemas and the mediating role of critics. Organization Science, 23: 83–97.
- 2013. The dual funding structure for research in the UK: Research council and funding council allocation methods and the pathways to impact of UK academics. Cambridge, U.K.: U.K. Innovation Research Centre.
- 2012. Causal inference without balance checking: Coarsened exact matching. Political Analysis, 20: 1–24.
- 2011. The impact of research grant funding on scientific productivity. Journal of Public Economics, 95: 1168–1177.
- 2009. Academics or entrepreneurs? Investigating role identity modification of university scientists involved in commercialization activity. Research Policy, 38: 922–935.
- 2017. The price of admission: Organizational deference as strategic behavior. American Journal of Sociology, 123: 232–275.
- 2015. Decoding the adaptability-rigidity puzzle: Evidence from pharmaceutical incumbents’ pursuit of gene therapy and monoclonal antibodies. Academy of Management Journal, 58: 1180–1207.
- 2010. Valuing the unique: The economics of singularities. Princeton, NJ: Princeton University Press.
- 2014. Audience heterogeneity and the effectiveness of market signals: How to overcome liabilities of foreignness in film exports? Academy of Management Journal, 57: 1360–1384.
- 1995. On the sources and significance of interindustry differences in technological opportunities. Research Policy, 24: 185–205.
- 2012. Friends, family, or fools: Entrepreneur experience and its implications for equity distribution and resource mobilization. Journal of Business Venturing, 27: 525–543.
- 2013. Bridging the mutual knowledge gap: Coordination and the commercialization of university science. Academy of Management Journal, 56: 498–524.
- 2014. Contrasting alternative explanations for the consequences of category spanning: A study of restaurant reviews and menus in San Francisco. Strategic Organization, 12: 7–37.
- 2009. How professors think. Cambridge, MA: Harvard University Press.
- 2012. Toward a comparative sociology of valuation and evaluation. Annual Review of Sociology, 38: 201–221.
- 2002. The study of boundaries in the social sciences. Annual Review of Sociology, 28: 167–195.
- 2016. The online shadow of offline signals: Which sellers get contacted in online B2B marketplaces? Academy of Management Journal, 59: 207–231.
- 2006. The “quality myth”: Promoting and hindering conditions for acquiring research funds. Higher Education, 52: 375–403.
- 1996. “Technology transfer” and the research university: A search for the boundaries of university–industry collaboration. Research Policy, 25: 843–863.
- 2017. Expertise vs. bias in evaluation: Evidence from the NIH. American Economic Journal: Applied Economics, 9: 60–92.
- 2015. Big names or big ideas: Do peer-review panels select the best science proposals? Science, 348: 434–438.
- 2006. Researchers’ industry experience and productivity in university–industry research centers: A “scientific and technical human capital” explanation. Journal of Technology Transfer, 31: 269–290.
- 2010. With or without U? The appropriate test for a U-shaped relationship. Oxford Bulletin of Economics and Statistics, 72: 109–118.
- 1995. Academic research underlying industrial innovations: Sources, characteristics, and financing. Review of Economics and Statistics, 77: 55–65.
- 1968. The Matthew effect in science: The reward and communication systems of science are considered. Science, 159: 56–63.
- 1973. The sociology of science: Theoretical and empirical investigations. Chicago, IL: University of Chicago Press.
- Mowery, D. C., & Nelson, R. R. (Eds.). 2004. Ivory tower and industrial innovation: University–industry technology transfer before and after the Bayh–Dole Act. Stanford, CA: Stanford University Press.
National Science Foundation. 2015. Statistics. Retrieved from http://www.nsf.gov/statistics/2015/nsf15314. Accessed October 15, 2015.
- 2011. Category reinterpretation and defection: Modernism and tradition in Italian winemaking. Organization Science, 22: 1449–1463.
- 2010. Do not log-transform count data. Methods in Ecology and Evolution, 1: 118–122.
- 2013. Collaborative benefits and coordination costs: Learning and capability development in science. Strategic Entrepreneurship Journal, 7: 122–137.
Organization of Economic Co-operation and Development (OECD). 2016. Main science and technology indicators. Retrieved from http://www.oecd.org/sti/msti.htm. Accessed January 20, 2017.
- 2013. What happens when evaluation goes online? Exploring apparatuses of valuation in the travel sector. Organization Science, 25: 868–891.
- 2001. To patent or not: Faculty decisions and institutional success at technology transfer. Journal of Technology Transfer, 26: 99–114.
- 2011. Maintaining legitimacy: Controversies, orders of worth and public justifications. Journal of Management Studies, 48: 1804–1836.
- in press. Protecting scientists from Gordon Gekko: How organizations use hybrid spaces to engage with multiple institutional logics. Organization Science.
- 2009. The two faces of collaboration: Impacts of university–industry relations on public research. Industrial and Corporate Change, 18: 1033–1065.
- 2013. Academic engagement and commercialization: A review of the literature on university–industry relations. Research Policy, 42: 423–442.
- 2010. A tale of two assets: The effects of firm reputation and celebrity on earnings surprises and investors’ reactions. Academy of Management Journal, 53: 1131–1152.
- 1978. The external control of organizations: A resource dependence perspective. New York, NY: Harper & Row.
- 2001. Middle-status conformity: Theoretical restatement and empirical demonstration in two markets. American Journal of Sociology, 107: 379–429.
- 2005. Status signals: A sociological study of market competition. Princeton, NJ: Princeton University Press.
- 1996. The dynamics of organizational status. Industrial and Corporate Change, 5: 453–471.
- 2013. The competitive implications of certifications: The effects of scientific and regulatory certifications on entries into new technical fields. Academy of Management Journal, 56: 597–627.
- 2008. Market watch: Information and availability cascades among the media and investors in the US IPO market. Academy of Management Journal, 51: 335–358.
- 2012. Two sides of the same coin: How ambiguous classification affects multiple audiences’ evaluations. Administrative Science Quarterly, 57: 81–118.
- 2001. Fool’s gold: Social proof in the initiation and abandonment of coverage by Wall Street analysts. Administrative Science Quarterly, 46: 502–526.
- 2009. Managing the rivalry of competing institutional logics. Organization Studies, 30: 629–652.
- 1999. Constructing competitive advantage: The role of firm–constituent interactions. Strategic Management Journal, 20: 691–710.
- 2006. Celebrity firms: The social construction of market popularity. Academy of Management Review, 31: 50–71.
- 2001. Using propensity scores to help design observational studies: Application to the tobacco litigation. Health Services and Outcomes Research Methodology, 2: 169–188.
- 2006. Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311: 854–856.
- 2008. Leading the herd astray: An experimental study of self-fulfilling prophecies in an artificial cultural market. Social Psychology Quarterly, 71: 338–355.
- 2013. Conflicting logics? A multidimensional view of industrial and academic science. Organization Science, 24: 889–909.
- 2002. Knowledge interactions between universities and industry in Austria: Sectoral patterns and determinants. Research Policy, 31: 303–328.
- 2017. When does Medici hurt DaVinci? Mitigating the signaling effect of extraneous stakeholder relationships in the field of cultural production. Academy of Management Journal, 60: 1307–1338.
- 2003. Commercial knowledge transfers from universities to firms: Improving the effectiveness of university–industry collaboration. Journal of High Technology Management Research, 14: 111–133.
- 2012. From practice to field: A multilevel model of practice-driven institutional change. Academy of Management Journal, 55: 877–904.
- 1973. Job market signaling. Quarterly Journal of Economics, 87: 355–374.
- 2014. Unmixed signals: How reputation and status affect alliance formation. Strategic Management Journal, 35: 512–531.
- 2004. Markets from culture: Institutional logics and organizational decisions in higher education publishing. Stanford, CA: Stanford University Press.
- 2012. The institutional logics perspective. Oxford, U.K.: Oxford University Press.
- 2007. Are there real effects of licensing on academic research? A life cycle view. Journal of Economic Behavior & Organization, 63: 577–598.
- 2012. Co-authored research publications and strategic analysis of public–private collaboration. Research Evaluation, 21: 204–215.
- 2010. Commercializing science: Is there a university “brain drain” from academic entrepreneurship? Management Science, 56: 1599–1614.
- 2011. How does popularity information affect choices? A field experiment. Management Science, 57: 828–842.
U.K. Government. 2013. Science, engineering and technology (SET) statistics. Retrieved from https://www.gov.uk/government/collections/science-engineering-and-technology-statistics. Accessed June 27, 2015.
- 2004. Combining entrepreneurial and scientific performance in academia: Towards a compounded and reciprocal Matthew-effect? Research Policy, 33: 425–441.
- 2012. Stigmatized categories and public disapproval of organizations: A mixed-methods study of the global arms industry, 1996–2007. Academy of Management Journal, 55: 1027–1052.
- 2005. Status evolution and competition: Theory and evidence. Academy of Management Journal, 48: 282–296.
- 2008. Instrumental variables estimation of the average treatment effect in the correlated random coefficient model. Advances in Econometrics, 21: 93–117.
- 2010. Econometric analysis of cross section and panel data. Cambridge, MA: MIT Press.
- 2014. Technology transfer: Industry-funded academic inventions boost innovation. Nature, 507: 297–299.
- 2015. Did Laurence Tribe sell out? New Yorker. Retrieved from http://www.newyorker.com/news/news-desk/did-laurence-tribe-sell-out. Accessed March 11, 2015.
- 2010. The sound of silence: Observational learning in the US kidney market. Marketing Science, 29: 315–335.
- 2005. The institutional logic of occupational prestige ranking. American Journal of Sociology, 111: 90–140.
- 2012. Construction, concentration, and (dis)continuities in social valuations. Annual Review of Sociology, 38: 223–245.
- 2003. The critical trade-off: Identity assignment and box-office success in the feature film industry. Industrial and Corporate Change, 12: 27–67.
- 2003. Robust identities or nonentities? Typecasting in the feature-film labor market. American Journal of Sociology, 108: 1018–1073.
- 2004. Shrewd, crude or simply deluded? Comovement and the internet stock phenomenon. Industrial and Corporate Change, 13: 171–212.
APPENDIX A ADDITIONAL FIGURES AND TABLES

FIGURE A1 Predictivity of Main Outcome Equation—GMM Poisson
We validate the predictivity of our GMM Poisson estimation using a procedure suggested by Blattberg et al. (2008). We randomly split the 5,131 observations into an in-sample and an out-sample. We then employ two models—(i) a model with all covariates and (ii) a model without industry evaluation squared—and plot the average of the estimated values of the dependent variable along a decile distribution. Consistent with our theorizing, a model including the term Industry evaluation squared should: (i) exhibit a good fit with the data (i.e., in both the in-sample and the out-sample, the dependent variable values should decrease smoothly as the deciles increase) and (ii) exhibit a better fit than a model that excludes the term Industry evaluation squared. Our results confirm these patterns: when the term Industry evaluation squared is included, the values of peer evaluation decrease smoothly along the decile distribution and no differences are recorded in the predictive power of the in-sample and out-sample across all deciles (see left panel). Conversely, when the squared term is excluded, the predictive power drops; notably, in the out-sample the DV values in decile 5 are higher than in decile 4 (see right panel). Thus, the predictive power of the model with all covariates (left panel) is higher than that of the model without the quadratic term (right panel).
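For readers who wish to reproduce this split-sample decile check on their own data, the following minimal sketch illustrates the logic under simplifying assumptions: a standard Poisson GLM stands in for the GMM Poisson estimator used above, and the data-frame and column names (panel_df, peer_eval, the covariate list) are hypothetical placeholders rather than our actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def decile_predictivity(df, dv, covariates, seed=42):
    """Split-sample predictivity check in the spirit of Blattberg et al. (2008):
    fit the model in-sample, then compare average predicted values of the DV
    by decile in the in-sample and out-sample data."""
    rng = np.random.default_rng(seed)
    in_mask = rng.random(len(df)) < 0.5                 # random 50/50 split
    insample, outsample = df[in_mask], df[~in_mask]

    # Plain Poisson GLM as a simplified stand-in for the GMM Poisson estimator
    X_in = sm.add_constant(insample[covariates])
    fit = sm.GLM(insample[dv], X_in, family=sm.families.Poisson()).fit()

    out = {}
    for label, sub in [("in-sample", insample), ("out-sample", outsample)]:
        X = sm.add_constant(sub[covariates], has_constant="add")
        pred = pd.Series(fit.predict(X), index=sub.index)
        deciles = pd.qcut(pred.rank(method="first"), 10, labels=False)
        # Average predicted DV per decile; a well-behaved model should show a
        # smooth pattern with little gap between the two subsamples.
        out[label] = pred.groupby(deciles).mean()
    return pd.DataFrame(out)

# Hypothetical usage with placeholder column names:
# covs = ["industry_eval", "industry_eval_sq", "prev_peer_eval", "tenure"]
# print(decile_predictivity(panel_df, "peer_eval", covs))
```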
DV = Grant application | Probit |
---|---|
D: Position = senior researcher | 0.580*** |
(.063) | |
D: Position = junior faculty | 1.291*** |
(.035) | |
D: Position = senior faculty | 1.431*** |
(.042) | |
Academic age | −0.002 |
(.002) | |
Tenure | −0.019*** |
(.002) | |
Irregularity of publishing record | −0.449*** |
(.052) | |
Quality of publishing record | 0.178*** |
(.011) | |
Patents | 0.0521** |
(.016) | |
Industry experience | 0.022 |
(.021) | |
Identity proximity with industry | 0.426** |
(.157) | |
Team quality | 0.094*** |
(.005) | |
Number of nonacademic employees in the department | 0.032* |
(.015) | |
Size of the department | −0.091** |
(.030) | |
Year fixed effects | Included |
Faculty fixed effects | Included |
Constant | −0.669** |
(.227) | |
Pseudo R2 | 0.475 |
Log pseudolikelihood | −7,576.74 |
Number of observations | 25,255 |
Number of individuals | 6,865 |
Robust standard errors in parentheses |
DV = Peer evaluation (£ amount of grants) | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 | Model 7 | Model 8 |
---|---|---|---|---|---|---|---|---|
Inverse Mills ratio | 0.387 | 0.32 | 0.215 | 0.218 | −0.32 | −0.118 | ||
(1.355) | (1.332) | (1.294) | (1.334) | (1.278) | (1.277) | |||
D: Grant proposal filed [1 & 2] | −1.380*** | −1.425*** | −1.564*** | −1.606*** | −1.663*** | −1.658*** | −1.639*** | −1.675*** |
(.202) | (.198) | (.204) | (.201) | (.198) | (.197) | (.194) | (.191) | |
D: Grant proposal filed [8 to 14] | −0.119 | −0.18 | −0.072 | −0.125 | −0.1 | 0.065 | 0.043 | 0.139 |
(.453) | (.448) | (.450) | (.443) | (.428) | (.424) | (.429) | (.407) | |
D: Grant proposal filed [15 and above] | −0.588 | −0.582 | −0.304 | −0.305 | −0.366 | 0.208 | −0.327 | 0.019 |
(1.014) | (.941) | (.982) | (.912) | (.855) | (.887) | (.861) | (.811) | |
Academic age | −0.009 | 0.026 | 0.038 | 0.069 | 0.006 | 0.035 | 0.073 | 0.016 |
(.099) | (.091) | (.095) | (.088) | (.076) | (.078) | (.073) | (.064) | |
Tenure | −0.170† | −0.164† | −0.173† | −0.166* | −0.08 | −0.132† | −0.183** | −0.094 |
(.095) | (.090) | (.089) | (.084) | (.072) | (.075) | (.071) | (.059) | |
Patents | 0.324† | 0.271 | 0.388* | 0.338† | 0.265 | 0.364* | 0.440* | 0.387* |
(.188) | (.187) | (.192) | (.187) | (.182) | (.177) | (.182) | (.171) | |
Industry experience | 0.014 | 0.007 | −0.015 | −0.021 | −0.091 | 0.045 | −0.008 | −0.021 |
(.149) | (.143) | (.135) | (.129) | (.096) | (.119) | (.127) | (.087) | |
Previous peer evaluation | 0.124* | 0.119* | 0.118* | 0.117* | 0.112* | 0.152** | 0.115* | 0.139** |
(.053) | (.051) | (.052) | (.050) | (.045) | (.048) | (.048) | (.043) | |
Industry evaluation | 0.091 | 0.269* | 0.09 | 0.255* | 2.326*** | 1.081** | 1.056 | 2.903*** |
(.084) | (.127) | (.079) | (.123) | (.675) | (.362) | (.664) | (.777) | |
Quality of publishing record | 0.912 | 0.666 | 1.243† | 0.936 | 1.170† | 0.767 | 1.843** | 1.102† |
(.623) | (.599) | (.747) | (.723) | (.668) | (.664) | (.605) | (.619) | |
Irregularity of publishing record | −2.173** | −2.037** | −1.299 | −1.22 | −1.068 | −1.339† | −0.154 | −0.94 |
(.808) | (.767) | (.809) | (.775) | (.752) | (.713) | (.671) | (.668) | |
Identity proximity with industry | −2.777 | −1.646 | −2.487 | −1.544 | −0.636 | −0.432 | 2.12 | 0.423 |
(2.669) | (2.505) | (2.435) | (2.258) | (2.074) | (1.798) | (1.325) | (1.624) | |
Team quality | 0.01 | 0.01 | 0.007 | 0.004 | 0.006 | −0.004 | −0.004 | −0.004 |
(.026) | (.025) | (.026) | (.025) | (.025) | (.024) | (.023) | (.023) | |
Industry evaluation × industry evaluation | −0.012* | −0.011* | −0.221*** | −0.041** | −0.098** | −0.260*** | ||
(.005) | (.005) | (.063) | (.014) | (.038) | (.066) | |||
Industry evaluation × identity proximity with industry | −2.167** | −1.427* | ||||||
(.703) | (.654) | |||||||
Industry evaluation × industry evaluation × identity proximity with industry | 0.214*** | 0.150* | ||||||
(.063) | (.061) | |||||||
Industry evaluation × quality of publishing record | −0.543* | −0.432* | ||||||
(.212) | (.180) | |||||||
Industry evaluation × industry evaluation × quality of publishing record | 0.022* | 0.020** | ||||||
(.009) | (.007) | |||||||
Industry evaluation × irregularity of publishing record | 0.857 | 0.619 | ||||||
(.599) | (.585) | |||||||
Industry evaluation × industry evaluation × irregularity of publishing record | −0.084* | −0.069* | ||||||
(.034) | (.031) | |||||||
Department fixed effects | Included | Included | Included | Included | Included | Included | Included | Included |
Year fixed effects | Included | Included | Included | Included | Included | Included | Included | Included |
Constant | 3.096* | 2.780* | 2.855† | 2.588† | 1.485 | 2.978* | 2.03 | 2.505† |
(1.472) | (1.360) | (1.480) | (1.407) | (1.443) | (1.337) | (1.294) | (1.303) | |
Number of observations | 5,131 | 5,131 | 5,131 | 5,131 | 5,131 | 5,131 | 5,131 | 5,131 |
Number of individuals | 1,571 | 1,571 | 1,571 | 1,571 | 1,571 | 1,571 | 1,571 | 1,571 |
Serial correlation AR(1) test | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Serial correlation AR(2) test | 0.40 | 0.38 | 0.39 | 0.39 | 0.43 | 0.42 | 0.40 | 0.45 |
Hansen test of overidentification restrictions | 0.07 | 0.09 | 0.11 | 0.09 | 0.09 | 0.06 | 0.14 | 0.32 |
Diff.-in-Hansen tests of exogeneity of instrument subsets | ||||||||
GMM instruments for levels | 0.15 | 0.25 | 0.03 | 0.02 | 0.03 | 0.16 | 0.07 | 0.17 |
Predetermined (all variables but those treated as endogenous/exogenous) | 0.09 | 0.19 | 0.13 | 0.16 | 0.13 | 0.16 | 0.42 | 0.64 |
Endogenous (previous peer evaluation, industry evaluation, industry evaluation squared, interaction effects) | 0.12 | 0.24 | 0.13 | 0.07 | 0.20 | 0.30 | 0.33 | 0.80 |
Exogenous IV (money awarded by government’s science funding bodies) | 0.53 | 0.52 | 1.00 | 0.69 | 0.45 | 1.00 | 0.75 | 1.00 |
Exogenous IV (percentage of gross domestic expenditure on R&D in the business enterprise sector) | 1.00 | 1.00 | 1.00 | 1.00 | 0.69 | 1.00 | 0.85 | 0.76 |
Exogenous IV (introduction of a software system at Minerva to streamline grant and contract administration) | 0.37 | 0.43 | 0.62 | 0.77 | 0.79 | 0.13 | 0.55 | 0.74 |
Matched-paired sample in the matching year | Treated individuals | Control group | ||||||||
---|---|---|---|---|---|---|---|---|---|---|
n | Mean | SD | Min. | Max. | n | Mean | SD | Min. | Max. | |
Faculty: Business school | 410 | 0.02 | 0.15 | 0.00 | 1.00 | 410 | 0.02 | 0.15 | 0.00 | 1.00 |
Faculty: Engineering | 410 | 0.33 | 0.47 | 0.00 | 1.00 | 410 | 0.33 | 0.47 | 0.00 | 1.00 |
Faculty: Medicine | 410 | 0.46 | 0.50 | 0.00 | 1.00 | 410 | 0.46 | 0.50 | 0.00 | 1.00 |
Faculty: Natural science | 410 | 0.19 | 0.39 | 0.00 | 1.00 | 410 | 0.19 | 0.39 | 0.00 | 1.00 |
Year | 410 | 2007 | 1.89 | 2004 | 2011 | 410 | 2007 | 1.89 | 2004 | 2011 |
Industry experience | 410 | 0.18 | 0.75 | 0.00 | 6.00 | 410 | 0.06 | 0.42 | 0.00 | 5.00 |
Patents | 410 | 0.28 | 0.76 | 0.00 | 6.00 | 410 | 0.12 | 0.51 | 0.00 | 6.00 |
Previous peer evaluation | 410 | 4.01 | 3.27 | 0.00 | 20.00 | 410 | 2.67 | 2.72 | 0.00 | 19.00 |
Tenure | 410 | 11.70 | 7.51 | 3.00 | 42.00 | 410 | 11.35 | 8.00 | 3.00 | 42.00 |
Team quality | 410 | −2.54 | 2.94 | −6.15 | 8.87 | 410 | −3.73 | 2.84 | −6.15 | 3.85 |
Riccardo Fini (riccardo.
Julien Jourdan (julien.
Markus Perkmann (m.