This chapter addresses the questions outlined in the statement of task (see Box 1-1 in Chapter 1) concerning best practices for supporting team science, including those related to training and education, virtual collaboration, and incorporating nonscientist team members. Drawing from the literature, committee expertise, and expert presentations, the chapter summarizes the current understanding of which best practices to implement within a science team, how to implement them, and when they could be applied to enhance team effectiveness. The committee defines best practice as an activity or strategy that can enable a science team to collaborate effectively to achieve its goals; it is synonymous with the term team development intervention in the broader literature on team effectiveness (Shuffler et al., 2018).
The chapter begins with an overview of best practices related to education, training, and professional development opportunities to support team science. These opportunities form the foundation of a healthy team science ecosystem and prepare individuals to collaborate effectively as members of science teams. The chapter also articulates a series of best practices that can be implemented throughout the life cycle of a science team to potentially improve team effectiveness. Table 3-2 at the end of the chapter outlines suggested best practices discussed throughout.
As noted in Chapter 1, the success of a science team depends on the technical expertise of its members, known as taskwork competencies, and on their ability to collaborate effectively, reflected in teamwork competencies.
Thus, team science competencies, or the knowledge, skills, and abilities required for team science success, extend beyond scientific expertise to include competence in communication, coordination, conflict resolution, and the ability to work across disciplinary and organizational boundaries. By developing both taskwork and teamwork competencies, members of science teams can better navigate the complex dynamics of scientific collaboration.
Teamwork and taskwork competencies can be either context-specific or transferable (Cannon-Bowers et al., 1995; Cannon-Bowers & Salas, 1998). Context-driven competencies are necessary for a specific team working on a specific scientific task. These competencies are particularly relevant to teams with consistent membership that perform similar tasks over time. Team-contingent competencies are those specific to a particular team, but the competencies are more generic in that they can be applied across different scientific tasks. These competencies are particularly relevant to science teams that work together regularly but handle a variety of projects. Task-contingent competencies are specific to a type of task but can be applied across different teams, and they are valuable when tackling a scientific problem that may involve different teams. Finally, transportable competencies are both team and task generic in that they can be applied across a range of tasks and teams. These general competencies are crucial for effective collaboration in any scientific context. Therefore, interdisciplinary graduate programs and professional development for team science often emphasize them (Fiore et al., 2019). While many existing programs and professional development opportunities focus on those earlier in their careers, including students, it is important to note that seasoned faculty could also benefit from participating in training opportunities. Faculty have cited time demands, a lack of incentive, and a lack of career-stage-appropriate training as barriers to their participation in professional development activities (Brownell & Tanner, 2017; Vela et al., 2023).
Developing team science competencies, especially those transportable across tasks and teams, is essential for supporting the team science ecosystem. Building these competencies equips individuals to contribute meaningfully, collaborate productively with others, and ultimately function as members of effective science teams, thereby driving the success of collaborative scientific endeavors.
Team science competencies are developed in various ways. Although many team science competencies are acquired through “on-the-job” learning, more structured efforts can cultivate these skills (see Fiore et al., 2019). In addition to online resources, such as those available through the International Network for the Science of Team Science1 and the University of California, Irvine’s Team Scholarship Acceleration Laboratory,2 these efforts include workshops ranging from a few hours to several days or longer, courses focused on interdisciplinary topics within academic curricula, and short courses offered to faculty. The Enhancing the Effectiveness of Team Science report (National Research Council, 2015) spurred the development of the TeamMAPPS training program, which was based on knowledge, skills, and activities identified as integral to successful team science work (Bisbey et al., 2021b). In addition, some programs incorporate team science competencies into their mentoring initiatives. The sections that follow provide examples of programs designed to develop team science competencies.
___________________
1 To learn more about the International Network for the Science of Team Science and view available resources, please visit https://www.inscits.org
Workshops are organized, structured interventions designed to bring together individuals to engage in collaborative learning, skill development, and reflection over a few hours to a few days. Typically, workshops focus on specific learning objectives, such as fostering interdisciplinary understanding, enhancing competencies related to collaboration, and bringing together unique ideas to generate innovative and transdisciplinary frameworks. These objectives are accomplished through various interactive activities, such as discussion, practical exercises, and assessment. These activities can provide participants with a safe environment to explore different perspectives, practice their team science skills, and develop a greater epistemological awareness (Audi, 2010). Workshops can vary in both length and format, from multiday immersive programs to half-day informational sessions. This section continues by describing the results of several studies on the outcomes of science team training workshops.
One study, for example, explored how workshops can enable science and engineering doctoral students to develop a better understanding of the various perspectives held by experts from different disciplines (Gosselin et al., 2020). This study of dispositional and epistemological characteristics found that a 9-day workshop intervention exposed students to different ways of thinking. Gosselin et al. (2020) considered factors such as communicating different perspectives and valuing insights from others. Using retrospective pre- and post-module evaluations and data from questionnaires and group reflections, they assessed participants’ behavioral and dispositional differences, finding that the workshop helped students increase awareness of their own values associated with science knowledge and of how these values differ from those of others on their team. The authors suggest that such workshops help students begin “their journey to understanding the importance of learning to navigate and negotiate dispositional distances and other forms of compositional diversity as part of collaborative processes” (Gosselin et al., 2020, p. 321). Although the assessments and interventions used were proprietary, these workshop formats could be replicated with more open-source measures that examine similar competencies.
___________________
2 To learn more about the Team Scholarship Acceleration Laboratory and access resources, please visit https://tsal.uci.edu
In support of ideation and knowledge integration, innovation labs are a type of workshop designed to foster learning about collaborators and the development of new ideas for transdisciplinary research (Hawk et al., 2024). In one study, facilitators guided early career scholars through a creative problem-solving process and assessed participants on competencies such as collaboration readiness and interest in starting new research projects. The study also examined whether workshop activities led to new team formations, grant proposals, and publications. Compared with scientists who engaged in normal institute activities, Hawk et al. (2024) found little to no difference in attitudes toward collaboration or research productivity; however, innovation lab participants showed increased intent to submit grant proposals. What is most useful about this approach is its attention to outcomes over a longer duration: the researchers examined several factors over time, including collaboration readiness and collaboration network size at 12 and 21 months postintervention, respectively (Hawk et al., 2024).
Another study examined learning and attitude change among faculty members who received seed funding to support research idea development in interdisciplinary teams (Morgan et al., 2021). Attendees at this workshop were a mix of junior- and senior-level faculty from several disciplines, including the social and life sciences, engineering, and the humanities. The workshop, comprising two half-days, included several sessions targeting a set of competencies meant to develop team members’ collaborative capabilities. These ranged from team science knowledge (e.g., best practices) to collaboration skills (e.g., communication and interpersonal collaboration) to team regulation concepts (e.g., goal setting and meeting coordination). Activities such as group discussions, quizzes, and teamwork templates provided practice in these different competencies. To measure outcomes, the investigators used pre- and post-test change measures, along with subjective reactions to the workshop content. In general, participants rated the workshop sessions as useful. Among participants who attended at least one session, there was evidence of some attitude change, including rating themselves as more ready to collaborate, as well as increases in behavioral trust. However, there were no changes in team-focused concepts such as clarity of roles, goals, or processes (Morgan et al., 2021).
Courses can be leveraged to develop team science competencies over a longer time period. By balancing lecture- and activity-based learning, courses provide opportunities to understand concepts and ideas in theory and then give participants opportunities to apply these learned competencies in practical, real-world contexts. Assessing learned competencies in this format may include reflective writing, oral presentations, team projects, declarative knowledge exams, and practical application of skills.
One study of a course designed specifically to improve team science competencies such as mentoring and debriefing found some increases in leadership self-efficacy and improvements in various facets of collaboration (Tumilty et al., 2022; see also Appendix B). The study measured leadership self-efficacy using the Kane-Baltes leadership self-efficacy test, which assesses a person’s belief that they can perform successfully as a leader. The course, which used authentic learning activities, focused on interprofessional training for transdisciplinary teams, examining how to acquire both specific competencies (e.g., leadership training) and general collaborative competencies (e.g., grant writing, interactions with experts such as biostatisticians). Participants also received formative feedback on their work, for example on proposal writing and oral presentations. The course emphasized team science competencies, including collaborative behaviors such as monitoring and reflection, perspective seeking, and inquiring about and probing research ideas. Also included were tools such as interactive team contracts, which showed benefits such as helping address issues within the team. Participants showed beneficial effects on communication, integrating practices such as open communication styles, empathic practices, and collaborative workflows into their writing (Tumilty et al., 2022). Overall, the methods used here can provide an effective template for both developing team science-relevant courses and evaluating their effectiveness.
One group of investigators developed coursework for promoting interdisciplinary education, with an equal focus on faculty instructors and students (Corbacho et al., 2021). They examined teaching practices that could trigger academic motivation and how different teaching teams used a common strategy to create a collaborative interdisciplinary environment, leading to consistent student experiences. The investigators also examined what distinguishes interdisciplinary courses from traditional ones. Using tools such as the Student Course Experience Questionnaire (Ginns et al., 2007) and the eMpowerment, Usefulness, Success, Interest, and Caring Model of Academic Motivation (Jones, 2009), Corbacho et al. (2021) assessed how well students recognized key components of interdisciplinary teamwork. In the context of teaching, this study identified some challenges,
including teachers feeling out of their comfort zones when applying team-building practices and creating open-ended problems.
In 1998, the National Science Foundation (NSF) created the Integrative Graduate Education and Research Traineeship (IGERT) program both to develop interdisciplinary competencies in students and to support changes in scholarly culture at participating universities. NSF ended IGERT in 2013 and has since created the NSF Research Traineeship (NRT) program to similarly support interdisciplinary, evidence-based traineeships (National Science Foundation, 2024), but researchers have continued to examine IGERT’s effects and have highlighted critical findings. For example, one report discussed the overall purpose of the program and summarized the formal and informal ways IGERT awardees developed interdisciplinary competence (Van Hartesveldt, 2016). Awardees of IGERT funding created new courses collaboratively to ensure that faculty with varied perspectives contributed to an interdisciplinary curriculum. Through these courses, graduate students learned about other disciplines and each other as they engaged in research team projects.
One of the primary goals of these courses was to increase multidirectional communication skills, including communicating ideas effectively, especially with those from other disciplines. IGERT projects included informal mechanisms such as boot camps or summer sessions that provided more experiential forms of learning, as well as an opportunity for focused learning and interaction. More broadly, IGERT included cross-mentoring, where faculty guided students from different disciplines. In general, IGERT outcomes promoted students’ attitudes toward learning and cross-disciplinary research (Van Hartesveldt, 2016).
The same report described findings suggesting that IGERT students tended to produce dissertations that incorporated more disciplines (Van Hartesveldt, 2016). IGERT students were also more active in conducting interdisciplinary research and creating interdisciplinary programs. Additionally, the report highlighted longer-term analyses in which graduates of IGERT programs said they were still drawing on their experience and competencies gained while in their graduate program (Van Hartesveldt, 2016).
Mentoring programs are a valuable resource for developing team science competencies in a less formal environment over prolonged periods of time. Mentoring programs are structured initiatives that assign experts (mentors) to support their protégés’ development in specific activities that align with the mentoring program’s purpose.3 Mentoring programs use a variety of frameworks (e.g., Montgomery & Page, 2018), such as a traditional hierarchical structure consisting of one mentor and one protégé, as well as nondyadic structures, including one mentor and one team, or team members mentoring each other. In the context of team science, these programs aim to enhance both teamwork (e.g., improving collaborative competencies and cross-disciplinary understanding) and taskwork (e.g., technical and research competencies) through such activities as direct mentoring interactions, experiential training, and collaborative exercises (National Academies of Sciences, Engineering, and Medicine, 2019; Rodríguez et al., 2021).
___________________
3 While informal mentor–protégé relationships that develop more organically exist, they are not the focus of this section.
The federal government has been active in supporting mentoring programs for graduate students and early career professionals. National Cancer Institute programs sometimes focus on developing both teamwork and taskwork by promoting an appreciation of different perspectives on scientific issues. These programs provide multiple mentors spanning the range of competencies to be developed, selected based on mentee needs. For example, the Transdisciplinary Research on Energetics and Cancer Centers initiative (Gehlert et al., 2015; Vogel et al., 2012) has postdoctoral scholars identify and select mentors who span disciplines, with the goal of enhancing inter- and intrapersonal competencies. The Academic Learning Health System Scholars Program also assigns multiple mentors, including a primary mentor, co-mentor, Translational Research Training Program core faculty mentor, peer mentor, and health system mentor (Woodside et al., 2021). Along with this set of mentors, the postdoctoral scholars participate in collaborative forms of training, including transdisciplinary research courses, journal clubs, and workshops to develop collaborative writing skills. Even if not specifically identified as such, a primary objective of these programs is to improve teamwork and taskwork, as they are designed to expose trainees to multiple disciplinary perspectives, increase research competencies, and enhance attitudes about collaboration. The overall goal is greater scholarly productivity through becoming a more team-oriented scientist.
The National Institutes of Health (NIH) Building Interdisciplinary Research Careers in Women’s Health (BIRCWH) program also includes a mentoring component.4 This program is designed for junior faculty interested in advancing cross-disciplinary research in women’s health; it provides multiple perspectives on a range of scientific and career issues. Nagel et al. (2013) evaluated the program using success indicators such as improved idea generation, indicated by more grant submissions in women’s health. The program identified a set of recommendations, or best practices, for ensuring success in team-based mentoring, including developing a written contract or agreement between participants to manage mentoring expectations (Guise et al., 2012) and clearly articulating roles for the mentoring team, such that some mentors focus on career issues while others focus on scientific content (see also Guise et al., 2017).
___________________
4 More information about the BIRCWH program is available at https://www.mayo.edu/research/centers-programs/womens-health-research-center/education/building-interdisciplinary-research-careers-womens-health-program
More recently, the NIH Clinical and Translational Science Awards (CTSA) Program has developed an approach that focuses on teams of trainees. CTSA combines didactic and experiential training in team science, including cross-disciplinary mentoring (McCormack & Strekalova, 2021). Team composition is also a focus: mentors are assigned two students pursuing PhDs or dual degrees (e.g., MD-PhD), and these mentee teams must be pursuing their doctoral degrees in different disciplines and in different colleges. Thus far, there is evidence of an increase in the collaborative activities of both the CTSA team trainees and mentors (McCormack & Strekalova, 2021). Some attitudinal changes are also evident: two-thirds of mentees noted that they plan to continue collaborations after training and to continue using the support tools, such as collaboration plans and author agreements, to which they were introduced in the CTSA program (McCormack & Strekalova, 2021).
A recent review of team mentoring approaches, whereby mentees are assigned multiple mentors with the goal of providing a variety of disciplinary and professional perspectives, described how team mentoring can improve transdisciplinary science (Shah & Fiore, 2022). The authors noted that mentoring teams support professional development by providing career guidance and direction and by improving exposure to critical people, helping mentees gain insight into political culture, organizational culture, or both. Furthermore, the mentoring team acts as an advocate on behalf of mentees and can help identify research opportunities and critical resources needed for early career success (Shah & Fiore, 2022).
The same review provided a set of guidelines for team mentoring to help identify targets for collaborative competencies, such as active listening (Shah & Fiore, 2022). For example, mentors can demonstrate active listening with their team by asking follow-up questions when discussing complicated issues. As a complement, mentees can practice active listening by discussing research with those from other disciplines. Shah and Fiore (2022) pointed out that mentors can enhance competencies in assertive communication by creating a safe environment for debating research topics, where mentees can readily voice their opinions and ideas in task-focused ways, ensuring that all ideas are considered.
Shah & Fiore (2022) discussed guidelines for team mentoring in cultivating interpersonal competencies, including understanding coordination. For example, mentors could create artifacts delineating roles and goals on
the team and expectations for different mentors. In addition, mentees can provide feedback to mentors when they are unclear about roles or goals while also learning how to safely ask for assistance as needed. This coordination and delineation of goals becomes particularly important for multiteam systems, where teams have to balance both local and shared goals (Shuffler et al., 2015).
Another important interpersonal competency is an appreciation of varied perspectives (Shah & Fiore, 2022). For example, mentors can model respect for theories, methods, or both from different fields during team meetings. Mentees, in turn, can learn to attend to, and maintain awareness of, positive or negative commentary about other disciplines, such as disdain for a particular field, and become comfortable discussing why such attitudes are problematic.
The past century of research on team effectiveness across industries has demonstrated that key teamwork processes—such as coordination, information-sharing, and conflict management—and emergent psychological states—such as cohesion, shared mental models, and transactive memory systems—are linked closely to achieving team objectives (Bell et al., 2018; Kozlowski & Chao, 2018; Mathieu et al., 2018). Moreover, research on team effectiveness has identified numerous best practices and team development interventions that teams and team leaders can deploy to support critical teamwork processes, emergent states, and team goal accomplishment. In this section, the committee articulates how to apply these best practices in the context of team science.
The committee framed its discussion of team science best practices according to the four stages of transdisciplinary research (Hall et al., 2012). The first stage, development, involves establishing a shared understanding of a scientific problem space and group mission. During the second stage, conceptualization, the team develops research questions, conceptual frameworks, and research designs to integrate and expand approaches from multiple disciplines. In the third stage, implementation, the team focuses on initiating, executing, and refining the planned research. Finally, translation involves applying research findings to develop innovative solutions for real-world problems.
Although these four phases are conceptualized as sequential, they are highly interconnected and are nonlinear for many teams (Hall et al., 2012). For instance, a team may cycle between the conceptualization and implementation phases many times before progressing to translation. This aligns with the broader teams literature, which emphasizes that teams cycle repeatedly through transition phases, marked by activities such as planning
and mission analysis, and action phases involving tasks such as coordination and back-up behavior (Marks et al., 2001). In addition, not all teams will engage in all four phases, particularly if transdisciplinarity is not a primary research goal; basic science teams, for example, may never reach translation but still engage in the earlier phases. Nonetheless, the overall organization of this section broadly mirrors the evolving and dynamic nature of science teams and how a team’s challenges, opportunities, and needs shift over time, depending on the goals of the science.
The remainder of this chapter outlines best practices for supporting a science team corresponding to the four phases of transdisciplinary research. Some best practices, such as those pertaining to leadership or virtual collaboration, can transcend all four team stages and are thus afforded their own sections. Understanding the phase within which a science team is situated and leveraging corresponding best practices during that phase improves the likelihood that the team will produce its desired outcomes.
In the development phase, a group of potential collaborators comes together to define the scientific or societal problem of interest, including its complexities and boundaries (Hall et al., 2012). Collaborators might be selected based on their relevant disciplines and perspectives to address the problem comprehensively. One framework, for example, identified several teamwork processes that are likely to be essential to team success during the development phase. The authors suggested that, during development, science teams can work to generate a shared mission and goal and ensure all members are aligned and motivated to pursue a common purpose. Another important process during the development phase is fostering critical awareness, where team members gain a broad understanding of the problem space and the different perspectives and expertise each team member brings (Hall et al., 2012). For example, as discussed in Chapter 1, community members and/or nonscientist team members may be instrumental in defining the problem space.
The authors also argue that the development phase can involve externalizing group cognition, or making explicit the team’s collective knowledge and thought processes (see Fiore & Schooler, 2004). Externalizing cognition plays a functional role in science teams in that it supports discussion and elaboration and helps teams identify points of agreement and confusion (Fiore & Wiltshire, 2016). Psychological safety, defined as the belief that a group is safe for interpersonal risk-taking, is also essential during the development phase: it encourages the open communication, risk-taking, and sharing of ideas that are critical for generating new knowledge as a science team (Edmondson & Lei, 2014). Psychological safety helps teammates feel safe to ask questions, challenge ideas, and share perspectives without fear of negative consequences or judgment. Together, these processes lay the foundation for effective collaboration and set the stage for successful research or problem-solving efforts. The following best practices can help teams achieve these developmental processes and emergent states.
Being intentional, strategic, and discerning when composing a team is a critical step toward effective team science, with the team’s purpose, needs, and tasks front and center. Team assembly includes (a) team task analysis, or thinking through the demands of the team; (b) team composition, or understanding the individual characteristics needed within the team; and (c) team member recruitment, or identifying and recruiting potential teammates.
A team task analysis is defined as “the process by which the major work behaviors and associated [knowledge, skills, and abilities] that are required for successful job or task performance are identified” (Arthur et al., 2005, p. 654). This analysis can be undertaken during and immediately after team assembly. Team task analysis involves articulating the key tasks the team will accomplish and determining how many individuals will be needed to perform those tasks; the complexity and interdependence of tasks; their frequency; and the specific knowledge, skills, and abilities required to complete them (Shuffler et al., 2018). A team task analysis can reveal which academic disciplines or areas of expertise are essential to achieve project goals. It can also help teams take a broader view of how tasks, especially those that are cross-disciplinary, may or may not align, potentially pointing to the need to prioritize generalist team members who can facilitate communication and collaboration across boundaries (Bammer, 2013). Multiteam systems face even greater challenges in aligning team goals, managing cultural differences, and coordinating tasks; identifying individuals who can facilitate this communication and collaboration can help with information exchange, trust-building, alignment of objectives, and maintaining system-wide cohesion (Carter et al., 2019; Kotarba et al., 2023; Zaccaro et al., 2020).
Following a team task analysis, the context, problem of interest, goals, timeline, funding, and other factors determine the appropriate size of a team (Hall et al., 2018). It is important that team size be commensurate with both the quantity and quality of tasks identified as necessary to solve a problem or test a hypothesis. Bibliometric analyses have found that larger teams tend to focus on more rapid development, adding incremental advances,
whereas smaller teams produce more innovative outcomes that take time to make an impact (Wu et al., 2019). Thus, team sizes will vary depending on the scientific objectives of the research. It is worth noting that larger teams have higher coordination costs due to the need to align more individuals, institutions, time zones, and so on (Berntzen et al., 2021; Faraj & Sproull, 2000; Forscher et al., 2023; Pendharkar & Rodger, 2009). Stronger efforts may also need to be made to ensure larger teams with weaker ties do not splinter into siloed objectives, tasks, and outcomes (Jeske & Olson, 2024).
Decisions around team size could also benefit from considering team freshness, or how incorporating new members and perspectives might affect an existing team’s performance. In an analysis using article citations as a proxy for impact, for example, smaller teams were found to be more negatively affected when new members were added (Liu et al., 2022), potentially reflecting a more concentrated effect of members’ unfamiliarity with one another’s work-related experiences, skills, styles, and values. The effect of a new member on team performance has been shown to differ depending on how similar the new member is to existing members in relational and task-related characteristics (Liu et al., 2023b). New members who differ from existing members in characteristics related to accomplishing work tasks are associated with positive reactions from existing team members, while those who differ in aspects such as trait likability are associated with negative reactions (Liu et al., 2023b). These findings are at odds with traditional research on teams showing that turnover can improve innovation (Levine et al., 2003), though the effect may depend on how experts are coordinated (Newton et al., 2019). Such findings illustrate why more research on science teams is needed to identify what does and does not extend from the science of teams.
The team task analysis and identification of knowledge, skills, and abilities feed directly into team composition, or selecting individuals to form a team. Scholars have often emphasized that team selection decisions ought to consider both taskwork and teamwork competencies. For example, there is increasing support for selecting team members who demonstrate collaborative attitudes and behaviors, rather than those who exhibit more individualistic styles of thinking and working (Kilcullen et al., 2023). This concept, known as team orientation, encompasses both an individual’s preference for working in a team rather than alone and their belief in teamwork as an effective means of accomplishing goals. Individuals with a high level of team orientation tend to contribute positively to team functioning by promoting higher levels of cooperation and creating an environment where the team collectively believes in its ability to succeed (Kilcullen et al., 2023). Trust and cohesion, emergent psychological states that have been shown to be critical for team effectiveness, are also enhanced when team members
are oriented toward teamwork (Kilcullen et al., 2023). The literature on team science recognizes the importance of team orientation, with scholars emphasizing that team-oriented individuals are more likely to engage in the collaborative processes necessary for scientific innovation (Fiore et al., 2019; National Research Council, 2015). In addition, team orientation is a key component of collaboration readiness (Hall et al., 2008).
Science teams might also prioritize including potential team members who exhibit a transdisciplinary orientation, a characteristic that develops over the course of an individual’s career, reflecting their values, attitudes, and beliefs as well as the behaviors necessary for effective cross-disciplinary collaboration (Misra et al., 2015). Going beyond merely appreciating or valuing interdisciplinary work, transdisciplinary orientation encompasses the cultivation of the specific competencies required for transdisciplinary work and a demonstrated history of engaging in such collaborations. However, as with valuing interdisciplinary collaboration, transdisciplinary orientation is underresearched.
Many other individual characteristics (such as agreeableness, conscientiousness, openness to experience, collectivism, and preference for teamwork) have been linked with team effectiveness in other contexts (e.g., Barrick et al., 1998; Bell, 2007). Moreover, science teams are likely to reap innovation and creativity benefits when team members with different identities and lived experiences are supported appropriately (Smith-Doerr et al., 2017). For example, research has shown that variety across demographic factors improves team performance and scientific impact (Hall et al., 2018). These teams can also be more effective at problem-solving, decision-making, and innovating, including when adopting participatory approaches that invite nonscientist members to inform research needs, relevance, feasibility, and outcomes (Tebes & Thai, 2018; Wallerstein et al., 2019). Since the 2015 report, for example, gender has been a focus of quantitative analyses of patterns in scientific outputs such as publications and grants (Aksnes et al., 2019; Fox et al., 2018; Huang et al., 2020; Jadidi et al., 2018; Kwiek & Roszka, 2021, 2022; Liu et al., 2023a; Nielsen et al., 2018; Smith-Doerr et al., 2017). Yang et al. (2022) found that science teams with both men and women produce more novel and more highly cited papers than single-gender teams. This advantage increases with greater gender balance; on average it holds across team sizes, subfields, and team leader gender, and it applies across science fields over the past 20 years (Yang et al., 2022).
Research in team science has also focused on examining aspects of heterogeneity beyond gender and discipline to include nationality and how that influences the collaboration process and research output of teams. Specht & Crowston (2022) examined scientific working groups that varied in terms of gender, discipline, and nationality and how team composition
influenced collaboration and outcomes. They found that key aspects of team composition had positive effects on the interdisciplinarity of the collaboration process, defined as the range of disciplines represented by the journals the teams published in and those they cited in their publications. Specifically, Specht & Crowston (2022) found that teams with more women had greater cited-discipline variety, and teams with more discipline variety had greater variety in both cited disciplines and publication outlets. In a related study, Smith-Doerr et al. (2017) found a positive correlation between the participation of more women in disciplines traditionally dominated by men and expanded scientific research agendas. However, there was a negative relationship between publication variety and the presence of different nationalities on the team (Specht & Crowston, 2022). Outcomes were measured by the number of publications and by the impact of the group’s work, computed as the median number of citations. This study also assessed team satisfaction and perceived team effectiveness and found that individual satisfaction was positively related to the number of publications the teams produced and to the proportion of women on the team. Moreover, the investigators found negative relationships between variety in nationality and in work practices (as measured, for example, by the variety of articles sourced) and both personal satisfaction and perceived team effectiveness (Specht & Crowston, 2022).
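As a rough illustration of how a citation-based interdisciplinarity measure like the one described above can be operationalized, the sketch below counts the distinct disciplines among a team’s cited journals. The journal-to-discipline mapping and function name are hypothetical; the actual coding scheme used by Specht & Crowston (2022) was far more extensive than this minimal example.

```python
# Hypothetical mapping from cited journals to their disciplines; a real
# analysis would draw on a comprehensive journal classification scheme.
JOURNAL_DISCIPLINE = {
    "Ecology Letters": "ecology",
    "Bioinformatics": "computational biology",
    "Nature": "multidisciplinary",
    "Social Studies of Science": "science studies",
}

def cited_discipline_variety(cited_journals):
    """Count the distinct disciplines among a team's cited journals."""
    disciplines = {
        JOURNAL_DISCIPLINE[journal]
        for journal in cited_journals
        if journal in JOURNAL_DISCIPLINE
    }
    return len(disciplines)

# A hypothetical team's citation list: repeated journals count once.
citations = ["Ecology Letters", "Nature", "Nature", "Bioinformatics"]
print(cited_discipline_variety(citations))  # 3
```

More refined variants weight each discipline by how often it is cited rather than simply counting distinct categories, which is one reason results can differ across studies using nominally similar measures.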
However, increasing differences across demographic factors on teams has been described as a “double-edged sword” because of its potential for both positive and negative effects (Milliken & Martins, 1996, p. 403; Smith-Doerr et al., 2017; Verwijs & Russo, 2024). Empirical evidence indicates the effect of heterogeneity on team outcomes is complex and may vary by how it is measured (Horwitz & Horwitz, 2007). Understanding these nuances is essential for leveraging individual differences to improve team performance. Several theories can help explain the potential for positive and negative effects associated with team variability. Information processing theory suggests that heterogeneous teams benefit from a wider range of information, knowledge, and perspectives, which can enhance decision-making and problem-solving (van Knippenberg & Schippers, 2007; Williams & O’Reilly, 1998). One area of organizational science research introduced the term faultlines, which are ways in which teams may split into subgroups based on one or more attributes (Lau & Murnighan, 1998). Research on faultlines has shown they are related positively to conflict and negatively to information elaboration, team performance, and team satisfaction (Thatcher et al., 2024). Social categorization theory posits that individuals can view dissimilar team members as “out-group” members, potentially causing communication difficulties, conflict, faultlines, and reduced cohesion (Verwijs & Russo, 2024; Williams & O’Reilly, 1998). Several meta-analyses have grouped demographic variables into those that are highly job-related
or less job-related, as well as task-oriented (related to completing tasks) or relation-oriented (related to building relationships; e.g., Joshi & Roh, 2009; Webber & Donahue, 2001). One meta-analysis (Bell et al., 2011) examined the relationship between variability and team performance by focusing on specific variables rather than grouping them, as was done in past meta-analyses. Bell et al. (2011) found that having team members who represented different functional areas in an organization, such as marketing and finance, was consistently and positively related to team performance, while variability on demographic variables such as education level, team tenure, and organizational tenure showed no consistent relationship with team performance.
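Findings like these depend in part on how variability is operationalized. For categorical attributes such as functional area, one common operationalization in the broader team literature is Blau’s heterogeneity index; the sketch below is a minimal illustration with hypothetical data, not the measure used by any particular study cited here.

```python
from collections import Counter

def blau_index(attributes):
    """Blau's heterogeneity index: 1 minus the sum of squared category
    proportions. Returns 0.0 when all members share one category and
    approaches 1.0 as membership spreads evenly across more categories.
    """
    counts = Counter(attributes)
    n = len(attributes)
    return 1.0 - sum((count / n) ** 2 for count in counts.values())

# Hypothetical four-person teams described by functional area
homogeneous = ["marketing", "marketing", "marketing", "marketing"]
heterogeneous = ["marketing", "finance", "engineering", "operations"]

print(blau_index(homogeneous))    # 0.0
print(blau_index(heterogeneous))  # 0.75
```

Because different indices (e.g., Blau’s index, separation measures, faultline strength) capture different aspects of team composition, studies using different operationalizations can reach different conclusions about the same nominal construct.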
To determine the extent to which potential team members possess the requisite knowledge, skills, and abilities for a specific science team, scholars have suggested conducting detailed interviews that collect information on disciplinary expertise, as well as values and competencies such as interactional expertise, behaviors, and team orientation (Knott et al., 2022). The team science literature offers guiding questions to gauge individual and team readiness—such as how much the team trusts one another to make individual compromises in favor of an integrated team approach, how available an individual is to collaborate, and how feasible it will be to work across the constraints of different participants and institutions (e.g., O’Rourke et al., 2019, p. 32).
Scientifically derived and empirically tested team readiness instruments represent another means by which a team can make decisions about its members. Both the Motivation Assessment for Team Readiness, Integration, and Collaboration instrument (Mallinson et al., 2016) and the Collaboration Success Wizard (Bietz et al., 2012) use survey responses to assess how individual motivations and orientations may coalesce into team-level opportunities and vulnerabilities. Another example, the Interdisciplinary Perspectives Index, measures the degree to which individuals value interdisciplinary work and captures their attitudes and emotional responses toward the significance of this work (Misra et al., 2009).
The process of assembling a science team is often complex and informal. First, there is the challenge of finding the right individuals. Although it might be clear what expertise or knowledge is needed for the team, locating individuals who possess those skills and are willing to participate can be difficult. This process is often driven by informal networking, where researchers rely on personal contacts, recommendations from other scientists or from nonscientist partners, or discussions at academic conferences to identify potential collaborators (Lungeanu et al., 2014, 2018; Zajdela et al., 2022). Such approaches, while often effective, are also unpredictable and may leave team leaders with
limited options if they lack the necessary connections within their fields or across disciplines.
Indeed, recruiting science team members is unlike team assembly in many industries, where team formation can follow a traditional, structured process: a leader identifies a clear goal, posts job ads, interviews candidates, and selects the best-fitting candidates. Instead, researchers may initiate collaborations based on mutual interests, long-standing professional relationships, or chance encounters at conferences or workshops. Many science teams are formed without a fully developed goal or project in mind, and the task itself may take shape only once the team members are assembled (Wang & Hicks, 2015). Often a collaboration starts with a vague idea or broad topic of interest, and only after initial discussions do the collaborators flesh out specific goals and methodologies. As a result, team members are often chosen not because they fit neatly into a predefined role, but because they bring a unique perspective or area of expertise that helps shape the project as it evolves. In these cases, the goals and tasks are developed alongside the team composition rather than preceding it, and team composition can shift or expand as the project becomes clearer.
Moreover, even when scientists have a well-defined task or research goal in mind, assembling a team with the right mix of competencies and expertise is not easy. When team members lack extensive networks or are looking to expand into unfamiliar disciplines, technological resources, such as third-party scientific expertise databases, can facilitate collaborator searches (National Research Council, 2015). Artificial intelligence (AI) models can also be trained to analyze research networks and identify relevant experts (Sourati & Evans, 2023; Xu, 2025). It is important to note, however, that several AI systems have perpetuated biases in hiring decisions, including with respect to race and sex (Dastin, 2018; Lee et al., 2019). Regardless, technological resources are less helpful for identifying individuals who possess the necessary combinations of teamwork and taskwork skills.
An additional challenge is convincing identified individuals to join a team. This is no small task in the world of science, where researchers are often stretched thin across multiple projects, grants, teaching responsibilities, and institutional roles. Even if a scientist is interested in the proposed project, they might not have the time to take on additional work. Moreover, team members need to be personally motivated to participate, whether because they find the research question compelling, see potential for career advancement, or value the opportunity to collaborate with particular individuals (Adler & Chen, 2011; Bennett & Gadlin, 2012). Without sufficient intrinsic or extrinsic motivations, it can be difficult to persuade highly capable scientists and other potential team members to commit their time and energy to a new collaboration.
Convincing potential collaborators to join a science team often involves articulating a compelling vision for the project. Team leaders or initiators may need to communicate the scientific value of the project as well as how participation will benefit each team member. Whether it is through the promise of publications, opportunities for professional networking, or the ability to contribute to cutting-edge research, individuals need to feel that their involvement is worthwhile (Bennett & Gadlin, 2012). In addition, clear expectations around roles, time commitments, and contributions are crucial to ensuring that potential team members feel confident about joining. Without this, even highly motivated individuals may be hesitant to sign on. Thus, the process of science team assembly is not only about finding and recruiting the right people, but also about creating the right environment, one where potential team members see value in the project, feel motivated to contribute, and have clarity about what their roles will entail (Bennett & Gadlin, 2012).
After team assembly, onboarding team members in ways that increase both work-related and personal familiarity is important for team effectiveness (Tannenbaum et al., 2023). Onboarding can include sending welcome letters with essential information, such as team member biographies and key points of contact, and facilitated discussions to promote mutual understanding around a project’s central components and highlight individuals’ strengths (e.g., the Toolbox Dialogue Initiative; see Bennett et al., 2014; Hubbs et al., 2020). As discussed in the next phase, collaborative agreements (i.e., team charters) can establish a shared set of expectations and synchronize team members into a cohesive vision. Team membership is often not static over time, meaning onboarding activities may need to occur during any phase in which new members are added to build and maintain shared mental models of the collaborative work. For example, teams could appoint an onboarding buddy to help a new member integrate into a team (Tannenbaum et al., 2023). Onboarding strategies can focus on providing sensitivity training, co-creating a team culture that supports work–life balance, implementing universal design principles, and fostering mentor–protégé relationships within the team (Arslan et al., 2025; Behar-Horenstein & Prikhidko, 2017; Mosca & Merkle, 2024; Wellemeyer & Williams, 2019).
Onboarding strategies can incorporate team-building as a proven means of enhancing interpersonal relationships (Klein et al., 2009). Interpersonal relationships help to strengthen social ties on a science team, which can help both launch and sustain collaborations over the long term (Smith et al., 2016). By strengthening team bonds, team-building can foster team cohesion and trust and a team climate that promotes perseverance
and constructive conflict resolution (National Research Council, 2015). A recent case study highlighted the positive connection between interpersonal relationships and science team productivity (Love et al., 2021).
Research suggests that to be most effective, team-building efforts can be tailored to the team’s unique challenges, context, and needs (Klein et al., 2009; Lacerenza et al., 2018; Shuffler et al., 2018). A scientifically based team-building intervention capable of helping a team overcome its obstacles will focus on one or more of the following: goal-setting, interpersonal relationships, role clarification, and problem-solving (Shuffler et al., 2018). One meta-analysis showed that all four of these focus areas engender positive effects on team outcomes, with goal-setting and role clarification exerting the strongest effect (Klein et al., 2009). All members of a science team can play an active role in designing a team-building intervention, including identifying and communicating any challenges to be addressed. If, for example, a team determines that improving interpersonal relationships is the priority, a team-building strategy could include creating time for informal, social experiences, such as sharing a meal between or after working sessions. In a presentation to the committee, Emily Ackerman (Harvard Medical School) emphasized that team-building interventions can prioritize participation, considering as needed the physical aspects of a location, the timing and cost of the activity, and the social complexity and sensory levels involved.5
Effective communication and information-sharing are at the core of successful teamwork. Communication allows team members to exchange and integrate their varying knowledge, skills, and expertise, which can be critical in addressing the complex, interdisciplinary problems that science teams often face. Communication can facilitate team coordination, problem-solving, and goal alignment, ensuring that all members contribute meaningfully to the team’s objectives. Effective communication facilitates critical team processes, such as team learning behaviors, where members actively share knowledge, ask questions, and clarify misunderstandings (Harvey et al., 2022; Wiese et al., 2022). In addition, communication plays a pivotal role in whether key emergent states, such as psychological safety, develop sufficiently (Frazier et al., 2017). In this way, communication drives task execution and fosters the conditions necessary for sustained team success. Communication is essential throughout all team phases and tasks, but given the importance of generating a shared language during the development phase, relevant evidence is presented here. Without clear
communication, the flow of critical information can be disrupted, leading to inefficiencies, misunderstandings, and ultimately ineffective performance.
___________________
5 Presentation to the committee April 10, 2024.
To communicate effectively, teams can develop a shared team language and vocabulary, especially for inter- and transdisciplinary efforts. Cross-training within the team, in which members share key concepts and terminology from their respective disciplines that are relevant to the project, is helpful to this end (Falcone et al., 2019; Henson et al., 2020). The use of analogies may be particularly suitable for cross-disciplinary teams, as they can demonstrate individual team members’ knowledge application in more widely relevant ways (Graff & Clark, 2018) and uncover and bridge differences in how a team understands a shared project (Paletz et al., 2013). Similarly, integrating keywords from individual team members’ areas of study into a common team glossary can facilitate improved communication by ensuring team members either are using the same words in the same ways or are aware of discrepancies. In case of discrepancies, teams can benefit from collectively redefining terms or arriving at an alternative vocabulary. When possible, identifying and avoiding discipline-specific jargon can reduce frustrations and misunderstandings and contribute to team effectiveness (Henson et al., 2020).
Modes of communication are a key consideration throughout all project stages. Promoting accessible communication practices, such as the use of qualified and vetted sign language interpreters and real-time captioning, and embracing different communication styles can help science teams unlock the full array of information and lived experiences to advance knowledge production and innovation. AI technology can also help facilitate communication that increases participation, including through real-time translation tools (Johnson et al., 2017). When participating in cross-training activities, teams can strive to use plain language summaries. Visuals can also be used to support text and speech, but not as the only means of communication.
The development phase focuses on critical team processes such as generating shared goals, developing critical awareness, externalizing group cognition, and establishing a psychologically safe environment. Careful attention to best practices pertaining to team assembly, team onboarding and building, and developing a shared language can set team members up for success and help them understand and respect one another’s different perspectives, lived experiences, and values; engage in co-learning; and find common ground. This is crucial for effective knowledge integration during the conceptualization phase.
In the conceptualization phase of a research project, science teams develop research questions and hypotheses, build conceptual models, and agree on research designs and plans (Hall et al., 2012). As such, this phase establishes the directions a science team’s research will take. Essential team phenomena during this stage include developing shared mental models regarding taskwork and teamwork, continuing to generate a shared language, and fostering team norms and values (Hall et al., 2012). These processes ensure that team members have a common understanding of the project’s goals, methods, and expected outcomes, which is essential for effective collaboration.
A team charter is a “formal document written by team members at the outset of a team’s life cycle that specifies acceptable behaviors in the team” (Courtright et al., 2017, p. 1462). Also known as a science prenup (Bennett et al., 2018), charters establish the foundation for effective team science by clarifying what teams will achieve, how they will accomplish their work, who will do what, and when the team will accomplish deliverables. Charters fit within the broader space of collaboration planning, a concept encompassing many factors that can affect taskwork and teamwork (Hall et al., 2019; Woolley et al., 2008), and are recommended as an activity to support teams (Shuffler et al., 2018). When formulated at the beginning of team interaction, ideally before the team has performed any work, charters operate as a psychological contract by setting mutual expectations and operating procedures and outlining what success looks like (Egeland & Schei, 2015). By helping the team agree on conflict resolution strategies and plan for potential challenges, such as authorship disputes, before they occur, a charter can eliminate or reduce misunderstandings regarding work effort and quality. Therefore, reviews in the broader team literature (Shuffler et al., 2018), the science team literature (Hall et al., 2019), and the translational science team literature (e.g., Begerowski et al., 2021) regard team charters as an evidence-based team development intervention for improving performance trajectories. Although typically implemented in smaller teams, research has also identified team charters as a best practice in multiteam systems (i.e., teams of teams; Carter et al., 2019).
Although widely implemented in practice, empirical research on team charters is limited in volume and sample, with most studies using nonscience teams and students (e.g., Aaron et al., 2014; Courtright et al., 2017; Egeland et al., 2017; Johnson et al., 2022; Mathieu & Rapp, 2009; McDowell et al., 2011). However, existing research concurs that team
charters positively predict team functioning and processes and are associated with higher levels of communication, effort, mutual support, cohesion (Aaron et al., 2014; McDowell et al., 2011), member satisfaction (Aaron et al., 2014), motivation, team efficacy (Johnson et al., 2022), and information integration (Woolley et al., 2008).
Although team charters tend not to influence team performance directly (e.g., Courtright et al., 2017; Johnson et al., 2022), studies have demonstrated team performance enhancements when team charters were combined with task-appropriate expertise (Woolley et al., 2008) and taskwork performance strategies (Mathieu & Rapp, 2009). Task cohesion has been found to be a key mediator of the team charter quality–team performance relationship (Courtright et al., 2017). In addition, charters boosted team performance by enhancing teams’ ability to navigate disruptions (Egeland et al., 2017). Team charters can potentially further promote psychological safety, particularly for heterogeneous science teams, through methods such as establishing clear policies around including those with individual differences, codifying flexible schedules and remote work options to accommodate different working styles and needs of all team members, designating specific channels or individuals as points of contact for team members to feel safe disclosing their concerns or needs, and establishing not only clear ground rules for operating as a team but also protocols for how to address grievances or rectify harm when rules are not followed.
Processes for creating a team charter can diverge in significant ways, but the final product is a written document that outlines the team’s purpose, objectives, responsibilities, and operational procedures. In some studies, team members prepared responses independently prior to group discussion (e.g., Mathieu & Rapp, 2009). In addition, some teams used trained facilitators to guide members through topics, probe for deeper insight, and request participation from all members (e.g., Rolland et al., 2021b). Topics discussed vary substantially, given differences in the type of team and the nature of team tasks (e.g., Egeland et al., 2017; Hall et al., 2019; Mathieu & Rapp, 2009). To illustrate, one group identified ten components to address in a collaboration plan for science teams (Hall et al., 2019).
Interdisciplinary science teams also need guidance on authorship policies, data and information management, and conflict management (Rolland et al., 2021a). Many science teams qualify as multiteam systems (Carter et al., 2019), necessitating that multiteam charters include identifying boundary spanners between component teams, determining inter-team leadership, agreeing on how team goals will be aligned, and anticipating inter-team friction (Asencio et al., 2012).
Although conflict is often seen as something to avoid, it is an inevitable aspect of teamwork and even a necessary component of collaboration, especially in scientific contexts where various viewpoints are essential for innovation and problem-solving (Fiore et al., 2015). Conflict arises when individuals bring unique perspectives and experiences to the table, leading to differing interpretations of the problem space (Weingart et al., 2015). These differences create perception gaps, where team members have varying understandings of the task at hand. Research on conflict resolution in science teams is scarce; however, a first step toward developing a conflict management protocol is distinguishing among the three types of conflict identified in the literature: task, relationship, and procedural/process (Jehn, 1997; O’Neill et al., 2013).
Task conflict refers to disagreements related to the content and outcomes of the team’s work, focusing on ideas, viewpoints, and strategies. When facing task conflict, fostering mutual understanding across team members, such as through facilitated discussions guided by the Toolbox Dialogue Initiative, as mentioned in the development phase, could be effective (Hubbs et al., 2020). Relationship conflict, on the other hand, involves personal disagreements and interpersonal tensions that are not task-related, often stemming from personality clashes or emotional incompatibilities. Highly intentional approaches to team member selection that work to surface individual orientations, values, and personalities during team assembly could help mitigate potential relationship conflicts; however, an impartial external facilitator may best handle relationship conflicts that emerge in later stages of teaming. Finally, process conflict arises from disagreements about how the team’s work, such as task delegation and timelines, will be carried out. A strong team charter can specify roles, responsibilities, and timelines to which every member of the team is asked to agree.
Whereas some studies have examined the simple existence of a charter (e.g., Egeland et al., 2017; Rolland et al., 2021a), others have emphasized the importance of charter quality (Courtright et al., 2017; Mathieu & Rapp, 2009). High charter quality is characterized as being detailed; broad
in scope, covering more aspects of team functioning; and high on professionalism, where the “content of the team charter is clearly and consistently laid out and is presented in a form that makes its content understandable” (Courtright et al., 2017, p. 1464). In addition to quality, a team’s engagement with its charter throughout its life cycle is an important consideration. In a study of nearly 1,900 teams, one group found that implementing a charter at the beginning of a team’s life cycle can help improve its processes (Johnson et al., 2022).
Regardless of the exact form a team charter takes or how it is developed, science teams will benefit when all members, including late-joining ones, are included in the charter development process; the discussion fosters trust, commitment, and buy-in for decisions that affect subsequent team interactions (Rolland et al., 2021a). All members of a science team can review and sign off on the written product, ensuring a mutual agreement and commitment to the charter’s principles (Byrd & Luthy, 2010). Science teams can both adhere to the content of their team charter and treat it like a living document that requires periodic review and updating to evolve with the team (Mathieu & Rapp, 2009).
When establishing a team charter and developing a project design, science teams need to engage in extensive team planning. Team planning refers to the processes through which a team sets goals, defines roles, outlines tasks, and organizes resources collaboratively to achieve a shared objective. It might involve creating a strategy that aligns the team’s efforts and ensures all members are working toward common outcomes. For instance, developing collaborative research questions and hypotheses may require revisiting the team’s inventory of expertise, interests, capacities, resources, and timelines to identify points of integration among team members.
Unfortunately, many teams struggle to plan effectively. The broader literature on team functioning emphasizes that teams often exhibit critical process losses during planning that can severely limit the effectiveness of team plans (Montoya et al., 2015). For instance, team planning can be hindered by a tendency for teams to focus on and discuss shared information all team members know, rather than discussing unique information that only one or a few members possess (Stasser & Titus, 1985). This shared information bias can limit a team’s ability to make optimal decisions, as critical insights may be overlooked. Prediscussion preferences can also create challenges, as team members often enter discussions with preconceived ideas, making it difficult to integrate new information. This can result in decisions being based on entrenched opinions rather than logical evaluation of all available data.
Uneven participation in discussion is another common issue, particularly in larger teams or those with individuals who have dominant personalities. Another problem is group escalation, which refers to the tendency of teams, especially highly cohesive ones, to commit to poor decisions resulting from pressures for conformity (Liao et al., 2004). Group escalation can lead to overconfidence and a failure to explore alternative options thoroughly. Moreover, teams often experience planning aversion, where they skip the planning phase entirely, particularly when under pressure to act quickly (Montoya et al., 2015). A lack of structured planning leaves teams unprepared when faced with unexpected challenges (Montoya et al., 2015). Each of these issues highlights the need for careful management and strategies to ensure effective team planning and decision-making.
To mitigate the typical problems associated with team planning, teams may need to implement a series of interventions. For example, to mitigate the tendencies for prediscussion preferences or shared information to dominate the conversation, leaders might frame the planning process as a problem-solving task rather than a judgment-based one. When teams focus on solving a problem, they are more likely to exchange novel insights and explore new solutions rather than merely validating existing opinions (Griffin & Guez, 2014). This approach encourages teams to gather all relevant information before making decisions, similar to methods used in creativity research (Wang & Nickerson, 2017).
Matching approaches, disciplines, and methodologies to the problem’s demands, rather than to any individual’s attachments, is important for designing a study capable of addressing the complex problem or question identified. Because they are engaged in complex problem-solving, science teams often rely on boundary objects to scaffold their understanding of a problem and its elements (Star & Griesemer, 1989). Scholars originally conceived of boundary objects as “inhabit[ing] several intersecting social worlds and satisfy[ing] the informational requirements of each of them” (Star & Griesemer, 1989, p. 393). Boundary objects can be used to align individual work in multidisciplinary teams, for instance, or to produce novel pathways for more engaged collaboration outside an existing field or discipline.
For example, systems mapping can help a team match a collaboration to a problem of interest by identifying and displaying the interlinkages of relevant components, connections, and interested parties, helping the team decide what is in scope versus out (e.g., Braithwaite et al., 2018). A systems map can be seen as a type of boundary object through which members of a science team understand their project’s collective goals, how they contribute to those goals, and how their efforts intersect with those of others on the team. In collaborative problem-solving, these types of co-constructed artifacts have taken different names, including process-based descriptions, such as
model-based reasoning (Pennington et al., 2021), external representations (Fiore & Schooler, 2004), cognitive artifacts (Fiore et al., 2010; Hutchins, 1999), and coordination artifacts (Schmidt & Wagner, 2004).
To foster knowledge integration in science teams, some intervention research has incorporated cognitive artifacts with teamwork processes. One group developed a program based on the cognitive and learning sciences, where science team members worked to integrate internal mental models of a problem with externalized or visual forms co-created with their team (Pennington, 2016; Pennington et al., 2016, 2021). Although there has been some evidence of attitudinal changes following this intervention, it has yet to be tested rigorously. Any material or processual artifact could be considered a boundary object, so long as it holds meaning for each individual member of a team and facilitates collaboration through the shared mental models or understanding it promotes across the team. As such, boundary objects can support inclusive design for team science (Star, 2010).
Another approach science teams may use to leverage collective knowledge and strengths into integrative project designs is perspective-taking, where individuals adopt other team members’ viewpoints on a topic to better appreciate differences across backgrounds and fields and integrate disparate perspectives into a cohesive team vision (Hoever et al., 2012). Boundary objects could represent a useful starting point for perspective-taking activities, as members of a science team try to reflect on the same object through a lens other than their own.
Open dialogue around assumptions and disciplinary standards is a proven means of fostering integration in science teams (Piso et al., 2016). It is also important to practice and promote epistemic humility, which involves encouraging team members to be open about the strengths and limitations of their disciplines and to develop respect for and engage with new ways of knowing (Boix Mansilla, 2006, 2017). This practice can be informative to other team members, who may be unaware of the constraints present in other disciplines and other organizations (Castillo et al., 2024). Boundary objects, cross-training, and dialogue during team planning and project design can together help science teams avoid disciplinary capture, or scenarios in which one of the collaborating disciplines overshadows the others in terms of decisions made and research directions taken, hampering the integrative potential of a team science approach (Brister, 2016).
Planning the taskwork is integral to improving team processes and outcomes (Shuffler et al., 2018). Although sources in the literature recommend certain task-based strategies—such as building in task interdependencies to encourage team trust and cohesion (De Jong et al., 2016) and holding regular team check-ins to foster information exchange and team learning—team members’ preferences and needs (e.g., working style, platforms, and interdependence) need to be taken into account for motivation to remain
high (Park et al., 2013). In the committee’s experience, failure to account for such preferences during team planning and project design can result in frustration and even team member attrition.
To improve decision-making during planning, teams could also allocate time to discuss alternative strategies rather than focusing solely on initial preferences. This increases the likelihood of identifying effective solutions and avoiding decisions based on entrenched views (Tschan et al., 2009). Including members with conflicting preferences can stimulate richer discussions, as dissenting opinions push teams to engage in deeper, more critical evaluations (Schweiger et al., 1989). Ensuring that team members develop accurate cognitive models of the task and environment also helps, particularly in reactive situations. Leaders can play a key role in guiding teams to consider different perspectives and avoid fixating on initial assumptions.
Small groups naturally promote participation, as individuals feel a sense of responsibility to contribute (Dening et al., 2022). However, large groups may require different strategies. For example, leveraging asynchronous virtual communication platforms, such as message boards or chat rooms, can provide quieter members an opportunity to contribute, reducing the dominance of more vocal team members. In addition, it is important to counteract both uneven participation and the tendency for individuals to exert less effort when working in a group than when working alone (Dening et al., 2022). Teams can address these motivation losses by increasing cohesion, identifying individual contributions, and holding members accountable for their input (Berengüí et al., 2021; Braun & Avital, 2006; Cady et al., 2018; Stewart et al., 2023; Whitworth & Biddle, 2007).
Finally, it is essential for teams to allocate sufficient time for planning. Without designated planning time, teams may skip this critical phase and resort to reactive decision-making. Using formal planning tools, such as charters or strategy outlines as discussed in the previous section, can ensure that key points are addressed during discussions. Teams under heavy performance demands can use low-workload periods to engage in planning, ensuring that preparation is not neglected when pressure mounts. Promoting a cohesive climate and setting challenging goals can further motivate teams to invest in detailed planning, recognizing the complexity of tasks ahead and the need for thorough strategies.
The conceptualization phase emphasizes the importance of laying strong, synergistic foundations for both teamwork and taskwork before any team science project begins. Best practices surrounding team charter development, team planning, and project design represent opportunities to build on team efforts in the development phase, strengthen shared mental
models, and provide a clear path forward. This foundation is crucial for effective operationalization during the implementation phase.
In the implementation phase, science teams execute the project plans established during earlier stages (Hall et al., 2012). Even when science teams have diligently followed best practices during the development and conceptualization phases, the transition to implementation is rarely seamless. During this phase, teams need to engage in several complex processes, such as experiment execution, that require integrating teamwork and taskwork. It is important for the team to periodically revisit and ensure alignment with its shared goals and vision. Effectively engaging in the implementation phase requires regular interactions among team members, sustained effort, and continuous monitoring of progress toward scientific goals. This includes communicating openly, providing backup support when needed, giving constructive feedback, and coordinating actions according to task demands (Hall et al., 2012; Marks et al., 2001). In the following sections, the committee provides a detailed overview of two critical best practices—project management and team debriefs—during the implementation phase.
Once a team enters the implementation phase of its scientific work, managing the project effectively becomes essential. Project management involves the systematic and deliberate application of knowledge, tools, and expertise to ensure the successful completion of complex projects in a timely and efficient manner (Sutton et al., 2019; Wuchty et al., 2007). Many project management practices have been shown to work outside of science teams, and programs are being developed for implementing these practices in the science team context (Brasier et al., 2023a; Steiner et al., 2023; Sutton et al., 2019). Project management, however, is not limited to a single individual within the team. Multiple members can engage in project management activities, contributing to task organization, communication, and coordination across the team. In the following, the committee highlights some of the best practices most applicable to science teams.
Over the past 2 decades, an entire science devoted to understanding team meetings has developed (Rogelberg, 2019; Rogelberg et al., 2006; Wolf et al., 2024). Meetings can enhance collaboration, facilitate exchanging information, and improve decision-making by providing a structured platform for communication and coordination (Allen & Rogelberg, 2013). When managed properly, meetings can foster employee engagement,
promote team cohesion, and align individual tasks with broader team goals (Mroz et al., 2018). Moreover, well-facilitated meetings can create a space for shared leadership and help balance individual and team-wide objectives, which are particularly important in multiteam systems (Wolf et al., 2024).
On the other hand, poorly run team meetings can lead to disengagement, frustration, and wasted time. And if meetings are too frequent, too long, or lack clear objectives, they can contribute to meeting fatigue and reduce productivity (Allen et al., 2012; Mroz et al., 2018). In multiteam systems, an overemphasis on meetings without sufficient inter-team task interdependence can be counterproductive, hindering performance and collaboration (Wolf et al., 2024). In addition, meetings that do not encourage participation or are dominated by a few voices can lead to missed opportunities for varying input and innovation (Allen & Rogelberg, 2013). Meetings and discussions can also be exclusionary if they do not incorporate accessible solutions such as sign language interpreters, real-time captioning, or assistive listening devices when needed.
Several strategies can help science teams maximize the effectiveness of their meetings. Key practices include having a clear agenda, starting and ending meetings on time, and ensuring that only essential personnel are present (Mroz et al., 2018). These strategies not only help meetings run efficiently but also avoid wasting time for individuals whose presence may not be required. In addition, those responsible for managing meetings can regularly assess whether a meeting is truly necessary. If there is no pressing need to meet, even for regularly scheduled meetings, it is entirely appropriate to cancel them (Rogelberg, 2019). Those managing the meeting can also play a crucial role in creating a welcoming environment where all participants feel psychologically safe to share their ideas and perspectives (Allen & Rogelberg, 2013). This promotes open communication and collaboration. In multiteam systems, managing meetings can be more complex, but maintaining a balance between team-specific and system-wide goals is essential for sustaining productivity (Wolf et al., 2024). Finally, regular feedback and follow-up on meeting outcomes, such as distributing meeting minutes and reviewing action items, are critical for ensuring accountability, maintaining momentum, and maximizing the effectiveness of team meetings (Rogelberg et al., 2006; Wolf et al., 2024). Artificial intelligence (AI) tools, such as Otter.ai, ChatGPT, and Fireflies.ai, can be effective in transcribing meetings and summarizing key points and discussions for later access by all team members. In the context of science teams acting as multiteam systems, it is recommended that meeting preparation and planning be incorporated into the multiteam charter (Asencio et al., 2012).
Facilitators, whether internal or external to a collaboration, can help science teams make the most of their meetings together. With a
focus on both the activities and interactions of teams (Bens, 2012), facilitators integrate teamwork and taskwork holistically to achieve meeting goals (Wróbel et al., 2021). Science team meetings can benefit from facilitation in a more traditional sense, whereby a facilitator consults on meeting design, tracks progress and time, fosters equitable and inclusive conversations, and establishes the right meeting tone and environment (Wardale, 2013; Wróbel et al., 2021). Conventional facilitation can further help science teams structure and guide crucial processes such as problem-solving, reaching consensus, decision-making, conflict resolution, and navigating different stages of a team (e.g., forming, storming, norming, performing; Kaner, 2014; Tuckman, 1965). Through thoughtful and intentional team interactions, facilitation can enhance idea generation (Kramer et al., 2001) and engagement (Parker, 2020) as important contributors to overall team performance. However, recent work has pointed to the need for facilitation practices designed specifically for boundary-spanning science teams (Cravens et al., 2022; Graef et al., 2021a,b).
One group proposed that science facilitation uniquely intersects collaborative science expertise with interpersonal expertise to align with the distinct purpose and challenges of knowledge-producing teams (Cravens et al., 2022). Meetings, then, are opportunities for science facilitators to create enabling conditions for “resolving many of the interpersonal and conceptual challenges of interdisciplinarity” (Graef et al., 2021b, p. 109). Challenges unique to science teams include navigating the ambiguity of novel cross-disciplinary collaborations, reconciling epistemic discrepancies, partnering with nonscientists, and negotiating disparate scientific priorities and approaches into a cohesive project capable of solving a real-world problem. Science facilitators can integrate these team science challenges into meeting activities and design (Graef et al., 2021a), and thus capitalize on meetings’ immense potential to contribute to the collaborative research process (Graef et al., 2021b).
For example, science facilitators can guide scientific visioning in a team meeting by creating a shared conceptual framework. In doing so, they can help science teams bridge cross-disciplinary communication, identify points of scientific contention, and prompt for linkages and uncertainties until achieving an inspiring, integrated vision that represents and holds meaning for every individual on a team. A science facilitator can also help a team match its meeting sessions to its phase in the scientific process. During conceptualization, a science facilitator may prioritize sessions featuring whole-group discussions where everyone can share their perspective with and learn from the full array of collaborators. A science facilitator can, however, recognize that session formats during the implementation phase may need to shift to small breakouts based on tasks or even to individual quiet working time. In these ways, science facilitation can operationalize
the intersection between collaborative science and interpersonal dynamics to advance team science (Hall et al., 2018).
As indicated throughout this chapter, communication is vitally important for the success of science teams. However, no matter how much planning is done in the development and conceptualization phases, communication challenges are inevitable when the implementation phase begins. Factors such as disciplinary silos, power dynamics, differing communication styles, and coordination across time zones and locations can create communication challenges for science teams (e.g., O’Rourke et al., 2023). Still, actions can be taken during the implementation phase to mitigate some of these challenges.
One general best practice is fostering a structured dialogue that creates an environment conducive to purposeful and meaningful exchanges among team members. Teams can prioritize cultivating a positive communication culture where all members feel safe and encouraged to share their perspectives, as this leads to more successful outcomes (Cason et al., 2020; Hubbs et al., 2020; O’Rourke et al., 2023). In addition, structured feedback mechanisms, such as regular updates or debriefs (discussed in greater detail later), are essential for maintaining productive communication during the implementation phase. Even in teams with generally effective communication, it is important to monitor interactions for acute challenges that, if left unchecked, can develop into chronic communication issues (O’Rourke et al., 2023).
A particularly effective practice is turn-taking, which ensures more even distribution of speaking opportunities in team meetings, allowing team members to share the floor rather than letting a few individuals dominate the conversation. Any team member, whether the leader, a participant, or an external facilitator, can implement turn-taking. One case study of using an external facilitator found that teams practicing even turn-taking tended to have higher success rates because it promoted the balanced exchange of ideas and enhanced collective problem-solving (Love et al., 2022). Although interactional best practices, such as turn-taking and creating space for social time, can be incorporated into virtual team settings, the consensus in the literature is that in-person meetings with face-to-face communication lead to more effective team science, especially during times of problem-solving and trust-building (Henson et al., 2020; Zajdela et al., 2025).
As the project matures, teams may consider developing information-exchange protocols that consider team members’ preferences to stipulate when, how, and to whom project updates or information will be communicated (Zajac et al., 2021), with quality of communication prioritized over quantity (Marlow et al., 2018). During check-in meetings, it is important for team members to discuss more than surface-level updates and instead
share honest responses around successes, failures, and unknowns. Allowing other members of the team to process such information can lead to improved understanding of where another member of the team can be deployed to help solve a problem, for example. In addition to discussions, it is also important to determine specific next steps to be taken between regular check-ins to ensure forward momentum is maintained.
Ongoing monitoring and assessment during the implementation phase is a crucial best practice (see also Chapter 5). Having a clear understanding of how the team is performing at any given time is beneficial, but there are specific considerations regarding what is being monitored, when it is monitored, and how the monitoring is conducted. First, it is essential to target specific attributes of the team to assess. Teams can be evaluated on various factors such as performance outcomes (e.g., quality, accuracy, and timeliness of the work produced) or on process and emergent state metrics (e.g., satisfaction, trust, cohesion; Rosen et al., 2008; Shuffler et al., 2018). When deciding what to measure, those managing a project need to consider whether the assessed factors are directly related to the team’s overall goals. For example, if the goal is to foster innovation, the team could measure factors such as creativity, the generation of new ideas, and disciplinary integration rather than focusing solely on traditional productivity metrics such as the number of papers published or the speed of task completion.
It is equally important to assess when to measure these dynamics. Some emergent states, such as trust, psychological safety, and cohesion, might be more meaningful when measured after significant events such as major project milestones, decision points, or conflict resolution (Carter et al., 2018; Kozlowski & Chao, 2018). Measuring team cohesion after an intense grant submission or research presentation, for example, could provide valuable insights into how the team responds to high-pressure situations and whether support mechanisms need adjustment. Other metrics, such as communication flow, may be appropriate to assess regularly throughout the project.
Finally, how a team monitors performance and process is critical for gathering actionable information. Different sources of feedback provide different perspectives, making it important to collect data from various team members, including principal investigators, junior researchers, and support staff. This variety ensures a more accurate and comprehensive team performance assessment, as each member may experience and interpret team dynamics differently (Wiese et al., 2015). Relying on a single source of feedback risks missing key insights, while integrating multiple data sources can highlight patterns, uncover hidden challenges, and support more targeted interventions.
Although the published research in the science team domain is limited, several tools have been identified that can significantly enhance the project management process. Project management tools are platforms, software, and structured methods that help organize, monitor, and manage project tasks, timelines, resources, and communications efficiently. These tools aim to streamline operations, reduce bottlenecks, and ensure that team efforts are well-coordinated for achieving project goals. Anecdotally, science teams often utilize such tools, but research on their specific applications in this context is still emerging, with recent studies highlighting their potential effectiveness (e.g., Gaffney et al., 2019; Steiner et al., 2023; Timóteo et al., 2021).
For example, one study examined the use of project management tools in a large, multisite lung cancer screening consortium to facilitate collaboration, efficiency, and productivity (Steiner et al., 2023). The tools employed in this case study included platforms such as Smartsheet for managing work plans, tracking data acquisition timelines, and maintaining project deliverables. In addition, the project used SharePoint and Microsoft Teams for document management and team communication, providing a centralized system for all sites to access essential documents and updates in real time. These tools allowed the team to maintain transparency; track progress; and coordinate complex, multisite research activities effectively.
Efficient resource-sharing is another key component of project management. Sharing resources such as data, software, and findings in real time can reduce duplication of effort, accelerate the research process, and ensure that all team members have access to the most current information (Santos et al., 2012). Leveraging collaborative document tools, such as Google Docs and Microsoft Teams, can allow team members to work on shared documents in real time, reducing email overload and ensuring everyone has access to the most up-to-date information.
Many project management platforms include integrated systems for documenting and archiving communications and decisions. This capability can help science teams maintain a comprehensive record of their projects’ development, which can support team communication, transparency, reproducibility, and onboarding of new members. Implementing a consistent file management system with clear folder structures and naming conventions helps everyone find information easily, further supporting efficient collaboration. Teams can additionally take steps to ensure that all documentation is in an accessible format.
AI-powered platforms can improve taskwork and teamwork by automating processes and improving coordination. These platforms streamline activities such as scheduling, task assignment, literature reviews, and document organization (e.g., Benchling, Connected Papers, ScopusAI). AI-driven project management tools can assign tasks, track progress, and automate
workflows so teams can focus on high-priority tasks (Jackson, 2022). Virtual workspaces such as GrantedAI and Kudos support specific stages of a project, including grant writing and research dissemination. AI tools can foster collaboration and efficiency by automating activities and coordinating efforts.
AI can also improve data and knowledge management systems (Jarrahi et al., 2023), enabling team members to access and share information easily, while intelligent collaboration platforms can optimize task allocation and track progress (Chen et al., 2017). AI tools, such as retrieval-augmented generation (Lewis et al., 2020), can organize, index, and retrieve relevant information from large datasets, providing team members with ready access to resources. AI tools can also support data anonymization and privacy management (Abay et al., 2019) and enable federated learning, in which a global model is trained without sharing sensitive information across teams (McMahan et al., 2017).
A team debrief is a formal team development intervention that “turns a recent event into a learning opportunity through a combination of task feedback, reflection, and discussion” (Keiser & Arthur, 2021, p. 1008). Conducted periodically after completing significant team activities, team debriefs are structured sessions for reviewing and analyzing a team’s performance (e.g., Shuffler et al., 2018). The primary goal is to reflect on what happened, identify what went well and what went poorly, and determine ways to improve in the future (Tannenbaum & Cerasoli, 2013). A wide swath of industries, including manufacturing, education, information technology, aviation, the military, and health care, employs team debriefs (e.g., Chen et al., 2018; Duff et al., 2024; Eddy et al., 2013). The committee uses the term team debrief because it has been used most often in reviews of the literature (e.g., Keiser & Arthur, 2021; Shuffler et al., 2018; Tannenbaum & Cerasoli, 2013), including literature on science teams (Begerowski et al., 2021). However, other names include after-action reviews (e.g., Department of the Army, 1993), post-mortem evaluations (e.g., Kasi et al., 2008), huddles (e.g., Reiter-Palmon et al., 2015), reflexivity (e.g., Tesler et al., 2018), and guided team self-correction (Smith-Jentsch et al., 2008).
Research on debriefs has been conducted primarily on nonscience teams. Multiple meta-analyses demonstrate that team debriefs are associated with higher team performance (e.g., Keiser & Arthur, 2021, 2022; Tannenbaum & Cerasoli, 2013). Compared with no-debrief teams, debrief teams improved team performance and that of the individuals within teams by approximately 25% (Tannenbaum & Cerasoli, 2013). A later meta-analysis found even higher performance effects after increasing the number of included studies (Keiser & Arthur, 2021). Mediators of the team
debrief–performance relationship include workload-sharing (Vashdi et al., 2012).
In addition to task performance, team debriefs can influence and improve attitudes, task knowledge, and team processes (Keiser & Arthur, 2021). Research has shown that debriefs enhance team adaptation (Abrantes et al., 2022) and leadership development (DeRue et al., 2012), and reduce decision time (Qudrat-Ullah, 2007). Meta-analyses have demonstrated the efficacy of debriefs in small (2 members), medium (3–5 members), and large (6–16 members) teams (e.g., Keiser & Arthur, 2022). In addition, debriefs are effective in both geographically dispersed and face-to-face settings (Keiser & Arthur, 2022). Furthermore, team debriefs are especially useful in high-complexity and ambiguous task environments that offer no intrinsic feedback (Keiser & Arthur, 2022). Therefore, in addition to action teams—in the military and health care—debriefs can be effective for project and decision-making tasks (Keiser & Arthur, 2022). Given the strength and generalizability of these results, debriefs are a straightforward, inexpensive, and easily implemented way to facilitate team effectiveness (e.g., Shuffler et al., 2018; Tannenbaum & Cerasoli, 2013). Based on the research conducted in the broader team literature, one analysis identified team debriefs as a central team development intervention for science teams (Begerowski et al., 2021). However, research on debriefing in science teams is needed.
Fundamental components of team debriefs include feedback, reflection, and discussion about specific performance events (Keiser & Arthur, 2021). Note that debriefs extend beyond feedback in that they are collaborative and ideally include all team members rather than just the team leader offering comments (Tannenbaum & Cerasoli, 2013). Additionally, a defining feature of team debriefs is self-learning, in which team members are actively involved in self-discovery rather than receiving feedback passively (Tannenbaum & Cerasoli, 2013). Moreover, whereas traditional feedback emphasizes outcomes, the focus of debriefs is the processes that contributed to successes and failures (Allen et al., 2018; Tannenbaum & Cerasoli, 2013). Because debriefs are developmental in intent, carried out to promote self- and team learning rather than to evaluate members for administrative decision-making (Tannenbaum & Cerasoli, 2013), outcome feedback regarding success or failure can be provided after debriefs (Salas et al., 2008).
Despite these similar components, there is substantial variation in how debriefs are conducted across studies and organizational settings (e.g., Eddy et al., 2013; Smith-Jentsch et al., 2008; Smith-Jentsch & Sierra, 2023). However, meta-analytic findings support using the original structure for debriefs developed by the U.S. Army (Department of the Army, 1993; Keiser & Arthur, 2022). This structure includes reviewing intended objectives, actual outcomes, effective actions, ineffective actions including near misses,
intended future objectives, and the strategy for achieving the intended future objectives (Department of the Army, 1993; Keiser & Arthur, 2022). Debriefs can include asking open-ended questions such as: What were we trying to accomplish? What happened in the team event? Where did we succeed in meeting our goals? Where did we fail to meet our goals? What caused our results? What can we start, stop, and continue doing? What are the important takeaways and lessons learned? (e.g., Keiser & Arthur, 2022). Identifying actionable next steps is critical to debriefing effectively (e.g., Chen et al., 2018; Salas et al., 2008). Following the group discussion, the debrief can be paired with outcome feedback and individual or team training as needed (Salas et al., 2008).
Researchers have provided additional evidence-based best practices in response to the variability in debrief implementation (e.g., Keiser & Arthur, 2021, 2022; Salas et al., 2008; Tannenbaum & Cerasoli, 2013). For example, team debriefs can be conducted as soon as possible after completing a significant team event, such as achieving a major deliverable; time period, such as after a shift; or training, such as after a simulation (Tannenbaum & Greilich, 2023) to avoid members forgetting important details with the passage of time (Salas et al., 2008). Levels of analysis can be aligned such that tasks, training, and criteria match for individuals and teams (Keiser & Arthur, 2021; Tannenbaum & Cerasoli, 2013).
Instead of a general performance overview, debriefs can provide specific examples of competencies and deficiencies (Tannenbaum & Cerasoli, 2013). As such, it is critical to establish and maintain high psychological safety so that members feel comfortable sharing errors and mistakes in a supportive rather than judgmental environment (e.g., Kolbe et al., 2020; Salas et al., 2008). It is also advised that feedback be supported with objective data, such as text or video recordings, rather than relying solely on memory (Keiser & Arthur, 2021; Salas et al., 2008).
Although one study found that using trained facilitators in debriefing activities enhanced team performance (Tannenbaum & Cerasoli, 2013), a later meta-analysis concluded the picture was more complex (Keiser & Arthur, 2021). Specifically, individual tasks benefited the most from having a facilitator, but team tasks benefited from a self-led approach, especially when combined with objective review media (Keiser & Arthur, 2021). Additional debrief characteristics interacted with one another to affect team outcomes, requiring more nuanced implementation guidelines (Keiser & Arthur, 2021). To illustrate, a highly structured debrief was more effective for action teams, such as those in the military, whereas less structure was needed in health care and other industries (Keiser & Arthur, 2021). Moreover, shorter debriefs of 20 minutes or less were best for teams, especially in health care, whereas longer debriefs of more than 20 minutes may be needed for individuals to allow sufficient time to cover all the feedback (Keiser & Arthur, 2021, 2022). These results highlight the importance of considering debriefing characteristics in combination rather than implementing them in isolation (Keiser & Arthur, 2021).
The implementation phase provides an opportunity for the team to sustain or reignite its initial enthusiasm for the scientific collaboration, which in turn supports long-term progress. To achieve team goals, team members need to actively interact, coordinate, and adapt. This requires project management, where actions are taken to ensure teams engage productively and stay on track, alongside regular team debriefs, which allow the team to reflect on progress, identify challenges, and adjust strategies as needed. Maintaining trust through transparency and reliability is essential during implementation. Trust fosters cooperation, promotes the sharing of resources and knowledge, and enhances the team’s motivation to achieve their objectives. High levels of achievement and trust will help teams continue their work together in the translation phase.
Research translation is the process of moving the findings of scientific research from the laboratory environment to human studies and ultimately into policy and practical applications that affect society directly. Because research translation requires understanding both the basic science and the context in which the innovation will ultimately be applied, science teams can strategically include team members—scientists, community advocates, patients, policymakers, and industry partners—with a range of expertise, experience, and reach.
U.S. government agencies have long affirmed the role of science teams in closing the gap between scientific discoveries and the application of those discoveries into tangible outcomes. For instance, in 2011, NIH established the National Center for Advancing Translational Science (NCATS)6 to support developing and implementing innovative processes, technologies, and methods for addressing the spectrum of human diseases and conditions affecting society. The NCATS approach is based on enabling the work of science teams that include government agencies, private-sector companies, patient advocacy groups, and other members of the scientific community. With a core value of collaborative team science culture, NCATS’s strategic plan outlines goals for collaboration and partnerships in each of its four
___________________
6 For more information, see http://ncats.nih.gov/about/about-translational-science and http://ncats.nih.gov/about/about-translational-science/spectrum
programmatic pillars. And in 2022, the CHIPS and Science Act, signed into law by President Joe Biden, affirmed this federal commitment by establishing the NSF Technology, Innovation and Partnerships (TIP) Directorate, the first new directorate in 30 years (Afful & Meiksin, 2022). TIP programs advance national competitiveness and strengthen societal impact by supporting partnerships between and among teams of researchers, practitioners, and users, enabling the co-creation of scientific discovery and thus closing the gap between discovery and societal impact (Glasgow & Emmons, 2007; Huebschmann et al., 2019).
Translational science teams face many of the same challenges of any science team and, as such, team leaders can prioritize team-building (Begerowski et al., 2021) and development (Stokols, 2010) using the best practices outlined elsewhere in this chapter.
A well-documented challenge in public health research is the limited attention to replicating studies to explore generalizability of public health interventions across contexts (Huebschmann et al., 2019). Public health researchers have prioritized internal validity over external validity, thus limiting the ability to translate research into practice.
Science teams oriented toward research translation are described using a variety of terms. Research translation can include components of community-based research, participatory research, community engagement, empowerment evaluation, participatory or community-based action research, and engaged research, all of which partner scientist and nonscientist team members. Not surprisingly, including nonscientist members on science teams has met with varied success in producing outcomes that are readily applicable to the target community. Mercer et al. (2008) tested the reliability of an evaluation tool for measuring the effectiveness of community-based participatory research. Their review of the literature revealed five best practices for effective engagement with community members, including forming an advisory board, establishing research agreements that outline roles and responsibilities, leveraging group facilitation techniques and designated facilitators, holding regular meetings to keep team members engaged, and hiring team members from the community. While Mercer et al. (2008) focused specifically on mechanisms for facilitating research translation, these best practices are used in other contexts as well.
Certain best practices, including collaborative technologies and team leadership, transcend any one phase of a science team. As such, relevant evidence for these two best practices follows, separate from the development, conceptualization, implementation, and translation phases but pertinent to all four.
With the advent of advanced information and communication technologies and the proliferation of their adoption that accompanied the COVID-19 lockdowns, the ways science teams are collaborating have significantly changed. As noted in Chapter 2, teams can operate entirely virtually, entirely in person, or as a hybrid team. Hybrid teams can be characterized by three key dimensions: geographic distribution, temporal dynamics, and communication richness. Geographic distribution refers to how team members are physically dispersed across different locations, ranging from fully collocated teams to those in which members work in entirely separate geographic areas (Handke et al., 2024). Temporal dynamics concern the timing and coordination of work, particularly how team members’ schedules and time zones align or differ (Handke et al., 2024). Finally, communication richness refers to the capacity of the communication medium to carry information (Kirkman & Mathieu, 2005).
It is crucial to recognize that hybrid teams can take on various configurations depending on these characteristics. For instance, a science team studying climate change may have field researchers collecting data in remote locations while others are in laboratory settings analyzing the data in real time. This team might have some members who meet in person periodically for intensive problem-solving, while the rest of the team participates remotely from different time zones, coordinating work through collaborative platforms. Another example could be an interdisciplinary research team in which experimental biologists work in a laboratory, while computational modelers collaborate from other institutions, contributing to the project through cloud-based data sharing platforms and virtual meetings. These complexities can be particularly relevant for teams with large component team distance, that is, large geographical, cultural, functional, or disciplinary distance between component teams. Virtual multiteam systems that are geographically dispersed, have communication barriers, and work in different time zones can struggle to navigate these configurations (Ingersoll et al., 2024). Developing communication norms, holding in-person meetings, and creating cross-team coordination roles could potentially help bridge these divides.
As virtual collaboration tools have evolved to facilitate science teamwork, it is important to consider the team’s specific configuration and how this configuration may change over time when assessing virtual collaboration tools’ use and effectiveness (e.g., Gibson et al., 2022). Compared with in-person teams, virtual and hybrid teams have different challenges when it comes to achieving team effectiveness, many of which are mediated by or dependent on virtual collaboration tools (Brucks & Levav, 2022; Handke et al., 2024; Purvanova & Kenda, 2022). When applied thoughtfully, virtual collaboration tools can help enhance communication, foster collaboration across geographic and temporal boundaries, and support flexible, adaptable, and efficient team functioning, regardless of how the team is configured.
When adopting, configuring, or implementing a technological tool, it is crucial to consider how team members may interact with the tool to best facilitate team dynamics. As suggested above, while these tools offer several benefits, the success of their implementation depends largely on how well they align with team members’ skills, needs, and comfort levels (e.g., Larson & DeChurch, 2020). Misalignment can lead to inefficiencies and frustration, mitigating the likelihood that these tools will facilitate scientific collaboration, and potentially contributing to suboptimal virtual team outcomes (e.g., Larson & DeChurch, 2020; Waizenegger et al., 2020).
One key factor to consider is team members’ familiarity and comfort with technology. While many constructs and scales are available (Holcomb et al., 2004; Martínez-Córcoles et al., 2017; Mason et al., 2014; Merritt et al., 2013; Montag et al., 2023; Sindermann et al., 2021; Sinkovics et al., 2002), the shared premise across these constructs is understanding an individual’s comfort with, and consequently propensity to engage with, technologies. Individual dispositions toward technology, as well as learned or situational factors, can play a critical role in how effectively people adopt and use these technologies within the team (Hoff & Bashir, 2015). When team members are comfortable with the technology, they are more likely to engage with it fully, taking advantage of its features to enhance team communication and coordination (e.g., Colbert et al., 2016; Kilcullen et al., 2022). On the other hand, a lack of familiarity can create barriers, preventing team members from adopting new tools or leading to inefficient use of existing ones.
For example, in a science team working across multiple institutions, unfamiliarity with new collaboration software can create significant challenges. If team members struggle to use key features such as document editing or data management, this can lead to miscommunication about research
findings, delays in decision-making, or even missed opportunities for collaboration. Instead of facilitating effective communication and collaboration, the software becomes a barrier, requiring extra time to troubleshoot. This disruption can detract from the team’s focus on critical scientific tasks, leading to frustration and negatively impacting overall team performance and morale.
Thus, it is critical for team members to feel comfortable using technological tools. This can be achieved through targeted training sessions that familiarize members with the tools they will be using and allow them to practice in a low-pressure environment. Exposure to technological tools can facilitate their use of these technologies and decrease anxiety surrounding them (e.g., Sherrill et al., 2022). That is, training not only helps individuals gain competence but also promotes collective confidence in using the tools effectively.
Another crucial consideration when selecting and using technological tools is their accessibility for virtual and hybrid work. While many tools claim to support seamless remote collaboration, not all are accessible to all team members, particularly team members with disabilities7 (Doush et al., 2023; Hersh et al., 2024). For instance, video conferencing software that lacks closed captioning can present barriers for D/deaf or hard-of-hearing team members. Similarly, tools with complex interfaces or high bandwidth requirements may exclude team members with limited technical capabilities or unreliable internet connections. Another accessibility challenge arises for those who are blind or have low vision. Tools that are incompatible with screen readers, for example, can make it difficult for blind or low-vision team members to contribute effectively (Leporini et al., 2023). In a related vein, tools that require extensive use of a computer mouse may not be suitable for people with different physical abilities (e.g., Marchant et al., 2005; Trewin & Pain, 1999). As such, ensuring that all tools are accessible to all team members—showing consideration for their individual and differing needs and enabling their participation without placing undue burden—is a key component of fostering an effective team environment.
Technological tools often include a wide range of features—such as screen sharing, instant messaging, hand raising, and file sharing—designed to facilitate communication and collaboration. While these are useful,
___________________
7 The committee acknowledges that identity-related terminology is complex, personal, and continuously evolving, and that the language used to refer to disabled communities (i.e., person-first vs. identity-first) differs by community and by individual.
without clear norms and expectations about how such functions will be used, teams may experience confusion, inefficiency, or frustration in their collaboration (Kilcullen et al., 2022).
For example, teams may end up violating data privacy agreements when norms surrounding these technologies are not established. Consider a laboratory using a multifunction project management platform that includes features for chat, video conferencing, file sharing, and data storage. Although the technology is convenient and could technically handle file sharing, it may not meet the data protection standards required for the sensitive research the team is conducting (see Chapter 4). Without established file sharing norms, team members could share sensitive files through this platform, inadvertently violating data privacy regulations.
Relatedly, establishing clear norms around the use of these tools is essential for promoting effective collaboration for everyone on the team (e.g., Gibson et al., 2014; Kirkman et al., 2002; Kirkman & Stoverink, 2021). This is particularly important in the context of team science, which frequently involves researchers from different disciplines, cultures, and language abilities (especially for non-native speakers of a primary team language) who bring unique skills and knowledge to a project. Virtual collaboration tools, such as speech-to-text features, can help foster communication and understanding by bridging language barriers or assisting team members who are D/deaf or hard of hearing. For D/deaf and hard-of-hearing team members, these tools can be tremendously beneficial in facilitating participation (e.g., Alshawabkeh et al., 2021; Ang et al., 2022). However, it is important to note that while automated speech-to-text technologies can be helpful, transcripts they produce are not considered compliant with the Americans with Disabilities Act (2010) if their content differs from the audio content; teams ought to be mindful of such limitations when setting expectations for accessibility. While speech-to-text technologies may work sufficiently to increase participation ease for D/deaf and hard-of-hearing team members, they are not as effective as live interpreters, who can more effectively convey aspects such as tone and nuance (Secară & Perez, 2022).
Beyond immediate accessibility, these tools can integrate different viewpoints by providing structured platforms for communication, data sharing, and collaborative problem-solving. The flexibility these tools offer can allow geospatially distributed team members to meet synchronously and contribute asynchronously, making it possible for teams who may not have had the opportunity otherwise to work together effectively (Meluso et al., 2022). Importantly, it is not just the tools themselves that can foster this, but the thoughtful design, implementation, and norms that guide their use. By co-creating and adopting norms, the team can ensure that everyone has an equal opportunity to contribute to the team.
Data security and use are critical aspects of information technology, and data sharing is a unique challenge for team science.
Addressing these barriers requires a nuanced understanding of how data security practices can inadvertently exclude certain groups or reinforce existing inequities, and many teams may be ill-equipped to understand or anticipate these issues when building teams (e.g., Law, 2023). Governance structures that include different perspectives are more likely to develop comprehensive security policies that address the needs of the population (Grindstaff & Mascarenhas, 2019). Data security protocols often assume a one-size-fits-all approach that may not consider the cultural and linguistic differences of users. For example, security warnings and protocols that are not localized or adapted to different languages and cultural contexts may lead to misunderstandings and noncompliance, posing a significant barrier. Data security measures, while crucial, often enhance complexity in system design, which can disproportionately affect those with disabilities or those who are less technologically proficient. For instance, complex authentication processes can be a hurdle for users with cognitive disabilities or who are blind or have low vision. Studies have highlighted the need for adaptive technologies that comply with security standards while also being accessible, ensuring that security enhancements do not hinder usability (Wentz et al., 2011).
The development of security technologies, such as biometric authentication systems, has raised concerns about built-in biases. Studies have shown, for instance, that facial recognition technologies have lower accuracy rates for women and people of color than for White males (Buolamwini & Gebru, 2018). This raises concerns and questions about the fairness of security measures. Some groups often face greater risks of surveillance and privacy violations, which can deter their participation in digital platforms where their data might be insecure (Bacchini & Lorusso, 2019). Ensuring that data security measures protect all users is crucial, particularly for populations who may be disproportionately affected by data gathering and data breaches (Goldshtein et al., 2024).
In addition to these concerns, there are several emerging questions regarding technology and data use, and the ways in which they may function as a barrier to team inclusion. For example, AI technology is increasingly included in many personnel decisions, such as applicant screening. Some evidence suggests that algorithms may introduce biases, and additional research is needed to understand the full effects of possible biases on teams (Albaroudi et al., 2024; Tilmes, 2022). Furthermore, in instances where demographic data are involved, careful consideration can be given to the security of these data, particularly when their public disclosure could cause the individual harm (Calabro, 2018). This can be applicable for demographic
data in relation to both research subjects and the researchers themselves. To address internal issues, institutions can provide clear definitions of harassment and equip offices, such as offices of research or general counsel, to raise awareness of policies regarding reporting, investigations, and remediation of harassment claims. Some universities have implemented policies and provided resources to support researchers who are victims of doxing, or the publication of private or identifying information on the internet (e.g., Columbia University’s Resources to Assist After Online Targeting/Doxing,8 University of Illinois Urbana-Champaign’s Trolling and Doxxing Attacks on Scholars — Executive Officer Action9). Professional societies like the American Association for University Professors have also issued guidance for faculty members who have been targeted by online harassment.10 Notably, some of the aforementioned resources for addressing online harassment may only be effective for addressing internal threats, and external threats may need to be addressed differently.
Advanced data security measures and data use can be resource-intensive, requiring modern infrastructure and sophisticated hardware or software that may not be accessible in underresourced schools, small community hospitals, low-income areas, or partners from low- and middle-income countries (Onoja & Ajala, 2022). This digital divide can prevent individuals in these areas from accessing secure services; secure yet affordable technology solutions are therefore essential to bridge this gap.
By addressing the specific barriers that data security can present, institutions can create a more secure, equitable, and inclusive digital environment. Effective data policies may include:
___________________
8 See https://universitylife.columbia.edu/doxing-resources
9 See https://provost.illinois.edu/faculty-affairs/faculty-resources/trolling-attacks-on-scholars-executive-officer-action/
10 See https://www.aaup.org/issues/targeted-harassment/what-you-can-do-about-targeted-online-harassment
Implementing physical security policies and protocols is also critical to protecting the safety of team members. Security risks can extend beyond threats to personnel to also include damage to physical spaces, equipment, and information technology systems. Institutions participating in team science, as well as science teams themselves, could benefit from being aware of and prepared for potential security issues that can arise, particularly when their research focuses on unpopular, politicized, or controversial topics. For example, some scientists working on projects related to COVID-19 have faced harassment and instances of physical violence (National Academies of Sciences, Engineering, and Medicine, 2023; Nogrady, 2021). Additionally, researchers who work with animal subjects have long been the target of extreme attacks, including bombings and arson (Collier, 2014). Along with the risks to the physical security of researchers, researchers who work on sensitive topics or are the targets of perceived or actual threats may also have additional needs to maintain psychological safety (Paterson et al., 1999; Williamson & Burns, 2014).
The National Research Council has previously issued guidance that research institutions and science teams can use to protect their physical safety (National Research Council, 2011). For institutions, this can include the use of security systems and the control of access to facilities, including through locks and access cards. Science teams can encourage individual members to increase their situational awareness and report suspicious behavior. They can also provide all members with training on what to do if a security emergency arises and avenues for reporting any incidents. Additionally, teams may need to take care in sharing scientific results when there are concerns regarding content and venue (Beechey, 2024). Teams can create dissemination and communications plans and data management plans to safely share results, which may include actions such as avoiding social media and anonymizing data.
Institutions have also issued crisis toolkits to provide resources to targeted individuals. For example, the University of Massachusetts Amherst’s Academic Freedom Crisis Toolkit provides resources and outlines the responsibilities of both faculty and administrators in responding to a security crisis.11 This stresses that the onus of responsibility lies not only with faculty and team members, but also with the institutions.
Finally, it is important to note that security can pose a particular challenge for team science, as science teams can include community members who may not have access to or knowledge of institutional resources. Multiinstitutional teams can also face challenges with differences in institutional
___________________
11 See https://www.umass.edu/faculty-development/resources/academic-freedom-crisis-toolkit
practices and protocols, making team communication at the outset of projects important in deciding what practices and protocols will be used and creating a cohesive strategy for how the team will respond to security issues.
Importantly, virtual, hybrid, and in-person teams rarely remain in the same configuration over time. For instance, a group initially composed of principal investigators at the same university might expand to include international researchers or transition from mostly in-person meetings to fully remote interactions. As teams evolve over time, it is necessary for the ways technological tools are used to also evolve to accommodate new needs and challenges. As team compositions and structures change, the affordances of technological tools (e.g., video conferencing software, shared databases, project management platforms) may need to be revisited. The term affordances refers to the potential actions or functions that a tool offers based on how it is used (e.g., Gibson et al., 2022). For instance, a team may have decided initially to use a tool for ad hoc communication but subsequently realized a need for handling more complex coordination tasks, such as scheduling across time zones or managing larger datasets.
As team configurations evolve, it becomes essential to revisit the accessibility norms and expectations that were initially set for the use of these technologies. A tool that worked well for a small, collocated group may no longer be effective for a dispersed or growing team. Similarly, accessibility needs may shift as team members from different regions or with different technological proficiencies join. Regularly reviewing these norms and conducting additional training when necessary ensures that all team members remain on the same page and can fully leverage the tools at their disposal.
Leadership has been a major focus of research in the organizational sciences for the past century (Carton, 2022; Yammarino et al., 2005; Yukl et al., 2002). However, empirical research on leadership in science teams specifically is relatively limited. Therefore, this section draws primarily from the broader leadership literature, especially the literature on team leadership (e.g., Zaccaro et al., 2001), to discuss leadership for science teams. When applicable, we incorporate findings from studies of science team leadership. The committee emphasizes the need for further research investigating science team leadership, particularly research focused on how to develop and support science team leaders.
Leadership is broadly defined as “the process of influencing others to understand and agree about what needs to be done and how to do it, and the
process of facilitating individual and collective efforts to accomplish shared objectives” (Yukl, 2006, p. 8). Whether this influence process emerges and proves effective depends on multiple factors, including the characteristics of leaders and followers and their interactions, as well as situational elements such as timing, group history, goals, and the scope of leadership activities (e.g., within a single team vs. across multiple interdependent teams). Moreover, leadership in science teams might be exerted by people who occupy formal positions of authority (e.g., principal investigators) or informally, by people without official leader positions. Leadership in science teams is often shared, distributed, or rotated over time, particularly when teams are large and include multiple co–principal investigators.
Team leadership can be understood through the lens of functional leadership theory, which argues that leadership effectiveness is the extent to which leaders and leadership processes support team effectiveness (Zaccaro et al., 2001). Team effectiveness is multifaceted and includes (a) team dynamics, or how effectively team processes and emergent states contribute to performance while fostering team learning and future collaborations; (b) the degree to which the team’s performance outputs meet or exceed performance expectations; and (c) the extent to which individual members benefit personally from being part of the team (see also Chapter 5). Thus, effective leadership in science teams refers to how well leaders and leadership processes support these core aspects of science team effectiveness. Notably, it is important to consider team leadership as a dynamic process that plays a critical role throughout the team life cycle. While the team performs activities related to achieving its goals, leaders can perform functions such as managing team boundaries, challenging the team, and providing resources (Morgeson et al., 2010).
Effective team leaders prioritize supporting team dynamics (Zaccaro et al., 2001). To do so, leaders can use the best practices discussed in this chapter to establish key enabling conditions for team effectiveness (Hackman, 2012): a real team, a compelling purpose, the right people, clear norms of conduct, a supportive organizational environment, and positive leadership styles such as coaching.
The first enabling condition, a real team, is an intact social system with clear boundaries that distinguish members from nonmembers. Team members work interdependently toward shared goals, with collective accountability for outcomes. Science team leaders can use tools such as team charters and team debriefs, maintain regular team communication, and clarify team membership and patterns of interdependence.
The second condition is a compelling purpose that inspires and intellectually stimulates the team. A well-defined purpose can motivate team members, align their efforts, and engage their talents. Establishing a clear purpose early on is crucial, as it influences team structure and determines the necessary organizational support. Science team leaders could articulate and reinforce a compelling purpose during initial team-building activities and throughout all phases of the team’s life cycle.
Third, leaders need to ensure the team is made up of the right people. This means carefully selecting members based on their skills and suitability for collaboration and ensuring that team members sufficiently understand one another’s areas of expertise. To this end, leaders could leverage best practices related to team assembly and carefully consider aspects of team composition.
Fourth, leaders can establish and reinforce clear norms of conduct through regular communication, meetings, feedback, and debriefs. Setting clear behavioral expectations minimizes the need for managing team member behavior and allows the team to focus on performance. These norms also encourage continuous evaluation of the team’s environment and the adoption of appropriate strategies. Leaders who articulate high-performance goals, provide constructive feedback, and model effective strategies help support the team’s affective and motivational states, such as trust, cohesion, psychological safety, and collective efficacy. Furthermore, leaders maintain the emotional climate by keeping conflicts task-focused and managing stress during high-pressure situations. Leaders also guide team coordination processes, ensuring that actions and resources are synchronized. For example, leaders can facilitate the integration of individual contributions and establish communication norms that foster flexibility and adaptability in dynamic environments.
Fifth, leaders can help ensure that the team has necessary resources and that the organizational environment remains supportive of team activities by engaging in boundary-spanning activities with the external environment. Boundary-spanning activities might include securing resources and information, building external relationships, advocating for the team, transferring knowledge, or coordinating with other teams (Marrone, 2010; Marrone et al., 2007). As articulated in Chapter 4, support from the broader scientific ecosystem can be essential for team science success.
Lastly, effective leaders provide team coaching and adopt positive leadership styles, such as transformational leadership (Hall et al., 2018) and inclusive leadership. Transformational leadership, characterized by communicating an inspiring vision, encouraging innovation, and promoting personal and collective growth, has been linked to enhanced team performance and creativity (Bass, 1999; Schaubroeck et al., 2007; Wang et al., 2011). Inclusive leadership behaviors include taking time to learn about and act upon the strengths, needs, and preferences of each member of the team; inviting different points of view on a topic; and practicing humility and limiting power dynamics, such as through shared leadership and decision-making (Nishii & Leroy, 2022; Roberson & Perry, 2021; Shore & Chung, 2021). Inclusive leadership promotes psychological safety, which in turn leads to greater risk-taking and innovation (Brasier et al., 2023b). Team coaching, particularly when conducted by formal leaders, can have a positive effect on team processes and performance (Bisbey et al., 2021b; Shuffler et al., 2018; Traylor et al., 2020). Coaching behaviors can evolve as the team progresses, shifting from inspiring vision during development to offering moral support, problem-solving, or coordinating resources during implementation (Hackman & Wageman, 2005; Reich et al., 2009).
In conclusion, effective leadership is crucial for the success of science teams, particularly in fostering team dynamics and ensuring the achievement of shared goals. According to the broader literature on team leadership, science team leaders need to strive to create the enabling conditions necessary for team effectiveness. This involves establishing a real team with a compelling goal, assembling the right mix of team members, setting and reinforcing behavioral norms, and maintaining a supportive emotional climate. This also involves engaging in boundary-spanning activities (e.g., securing resources, aligning the team with the external environment) and exhibiting positive leadership behavioral styles (e.g., transformational and inclusive leadership, team coaching). These leadership practices are important throughout the life cycle of the team. Given the unique demands of science teams, future research on effective leadership in science teams will be essential for improving their collaborative efforts.
This chapter has outlined best practices for supporting science teams, situating them within the framework of the four stages of transdisciplinary research (Hall et al., 2012) and highlighting cross-cutting practices. Although the best practices discussed in this chapter are widely recognized as important for enhancing team science effectiveness, the committee emphasizes several key caveats. First, not all best practices will be suitable for every team or scenario. Science teams vary in structure, composition, and purpose, ranging from small, focused groups working on narrow, specialized problems to large cross-disciplinary teams tackling broad, complex issues. Moreover, the ideal timing and implementation of a best practice can depend on many factors, including team size, purpose, goals, challenges, and level of virtuality. It is important for teams to carefully assess their unique context and needs to determine which practices will be most beneficial and when to apply them.
The committee’s second key caveat is that the empirical evidence base for team science best practices is still evolving. Although the broader literature on teamwork has identified key strategies that science team leaders, facilitators, or members could apply to enhance team effectiveness, the specific application of these practices in science teams requires further investigation. Some of the best practices discussed in this chapter, such as team charters and team debriefs, are well established and supported by a strong empirical foundation (e.g., Courtright et al., 2017; Mathieu & Rapp, 2009), whereas others are based on the committee’s expert judgment. When possible, the committee recommends best practices with empirical evidence; when such evidence does not exist, it recommends best practices based on committee and other expert experience. Moreover, even for the best practices with a stronger empirical foundation, much of the research has been conducted in organizational contexts outside of scientific collaboration. Because science teams share many essential characteristics with teams outside of science, the best practices identified in the broader literature on team effectiveness are likely to be relevant within the context of team science. However, science teams may also exhibit unique characteristics that are relatively underexplored in that literature, and to the extent that science teams differ from other types of teams, some best practices drawn from the general teamwork literature may not be fully applicable. This evidence gap highlights the need for further research on team science best practices and for the development of tailored approaches to optimizing science teams.
The critical research questions in Table 3-1 are intended to encourage research on the best practices discussed in this chapter. Although not exhaustive, these questions provide researchers with a foundation for exploring unanswered areas within the science of team science. Table 3-2 presents a summary of the best practices described in this chapter, for easy reference.
TABLE 3-1 Research Questions for Understanding Effective Science Teams
| Research Questions | |
| Development | |
| Team assembly |
What are the most important characteristics (beyond expertise) for selecting science team members to maximize collaboration and scientific output? How do team members’ orientations toward teamwork and interdisciplinary science influence the development of attitudes (e.g., trust, cohesion) in newly formed science teams? How do structured onboarding processes (e.g., tailored team charters, mentorship programs) affect team science attitudes (e.g., psychological safety), behaviors (e.g., communication across disciplines), cognition (e.g., knowledge integration, role clarity, shared mental models), and productivity? Do these vary by experience or seniority (e.g., new graduate students, postdoctoral researchers) within established science teams? |
| Team onboarding and building |
How do onboarding practices contribute to fostering a sense of belonging and engagement among members of science teams? How do belonging, engagement, and psychological safety affect team processes and outcomes? At what stage in the team’s developmental life cycle do team-building activities contribute most to science team success? How do nontask team-building activities (e.g., social events such as meals) affect team processes and outcomes? How do task-based team-building activities (e.g., writing retreats) or team-based initiatives (e.g., conflict resolution workshops) affect team processes and outcomes? How do these task and nontask team-building activities differentially influence emergent states such as psychological safety, trust, and cohesion within science teams? |
| Team charter |
To what extent do onboarding strategies, such as the use of team charters, enhance alignment and mutual understanding during the initial stages of science team collaboration? Which components of a team charter (e.g., roles and responsibilities, communication protocols, conflict resolution, team goals) have the greatest impact on the long-term success of science teams? How does the inclusion of a detailed conflict resolution process in a team charter influence the ability of science teams to handle disagreements and maintain productivity? |
| Shared goals and understanding |
What are the key factors that facilitate the development of shared mental models in newly formed science teams, and how do these factors vary across interdisciplinary and single-discipline teams? How do cognitive process interventions (e.g., systems mapping, boundary objects) affect knowledge and behavioral coordination? How does the process of collaborative goal formulation and the negotiation of task interdependence during the early stages of team formation contribute to the emergence of shared mental models? How does the frequency and quality of team communication influence the development of transactive memory systems in science teams? What are the challenges stemming from differences in expertise that prevent the science team from developing shared cognitive states (e.g., shared mental models, transactive memory systems)? |
| Conceptualization | |
| Team communication |
How does the quality of team communication (e.g., clarity, openness, frequency) influence the effectiveness of scientific collaboration in interdisciplinary science teams? How do individual and organizational cultural and disciplinary differences affect communication patterns within interdisciplinary science teams, and what strategies are most effective in overcoming communication barriers? How does long-term communication scaffolding between senior and junior team members (e.g., graduate students, postdoctoral researchers) foster the development of expertise and support the gradual integration of new knowledge into the team’s research output? |
| Project design |
How does the use of boundary objects or physical artifacts influence knowledge coordination (e.g., integrative team planning, project design) in science teams? How does practicing epistemic humility among team members influence team processes (e.g., concept integration) and project outcomes (e.g., novel ideas) in interdisciplinary collaborations? What strategies are most effective in promoting even participation among team members in larger or heterogeneous science teams? |
| Implementation | |
| Team meetings |
How does the frequency and structure of team meetings influence knowledge coordination (e.g., shared mental models) and behavioral coordination (e.g., participative decision-making) in science teams? How can team meetings best be leveraged to ensure that the methods and policies that set team norms and collaborative dynamics remain relevant as the team evolves? What meeting strategies (e.g., turn-taking) promote team learning and integration of different perspectives in science teams? |
| Project management |
Does real-time resource sharing (e.g., data, software, documents) improve science team collaborative processes, helping the team work together and develop shared cognitive states (e.g., shared mental models) more efficiently? What is the best strategy for employing regular team check-ins, where science team members discuss successes, failures, and unknowns, for virtual and hybrid teams to facilitate the development of team learning? What type of training is needed to facilitate the effective use of project management software in science teams? |
| Team debriefs |
How does the use of structured team debriefs impact the long-term performance and learning outcomes of science teams, particularly in high-complexity or interdisciplinary projects? What role does psychological safety play in enhancing the effectiveness of team debriefs in science teams, and how does it affect the willingness of members to share mistakes and areas for improvement? To what extent should text or video recordings be included in team debriefs and to what extent does their use enhance feedback accuracy and increase the acceptance of feedback among science team members? |
| Translation | |
| Open science |
How does the implementation of open science practices (e.g., sharing data and methodologies) influence the speed and breadth of research translation into societal benefits? How does the use of open science practices foster interdisciplinary collaboration, and how does this collaboration contribute to more effective translation of research findings across different sectors? |
| Other |
What strategies can science teams use to ensure that their findings are replicable and applicable across different societal, cultural, and geographic settings (i.e., external validity)? How does the focus on external validity during the research process impact the development of team dynamics and the eventual translation of scientific findings? At what stage is it best to include individuals directly impacted by the scientific research into the collaborative process to best facilitate the translation of findings? What communication strategies are most effective for ensuring scientific findings are understood and utilized by nonacademic partners, such as policymakers and community leaders? How does early planning for research translation influence the long-term success of translational efforts, and what are the key factors that contribute to effective planning across different phases of research? To what degree do translation goals need to be incorporated into ongoing team evaluations to best ensure that research findings are effectively moved into policy and practice? |
| Leadership | |
| Enabling conditions |
How do science team leaders establish the enabling conditions for team effectiveness in a cross-disciplinary team context, and how quickly can these conditions be established? Can the responsibility for setting and reinforcing behavioral norms be distributed among all members of a science team, rather than resting solely on the formal leader, and can this shared responsibility be established immediately after team formation, or does it need to develop over time? |
| Boundary-spanning |
Which boundary-spanning behaviors are most critical for team success throughout the scientific process, and does the importance of these behaviors change depending on the team’s current scientific phase? To what extent will leaders engage in boundary-spanning activities to align team goals with the broader scientific ecosystem, and how transparent will they be with their team about these efforts? |
| Shared leadership |
How does shared or distributed leadership influence team dynamics, accountability, and effectiveness in large science teams with multiple co–principal investigators? How does the rotating or shared leadership model in science teams affect leadership effectiveness, and what best practices can be developed to support this structure? |
| Collaborative Technologies | |
| Team configuration |
How does the geographic distribution of science team members in hybrid and virtual science teams affect the use and effectiveness of virtual collaboration tools for sharing knowledge-related research materials? To what degree are cultural and language-based barriers to scientific collaborations in international science teams mitigated when using technological tools? |
| Technology–team interaction |
What impact does exposure to and familiarity with virtual collaboration tools have on reducing anxiety and enhancing productivity in interdisciplinary scientific collaborations? To what degree does establishing norms for using virtual collaboration tools influence the quality of communication, engagement in effective team processes, and development of important emergent states in hybrid and virtual science teams? |
| Tool accessibility |
How does the accessibility of virtual collaboration tools, particularly for scientists with disabilities, affect their participation and contribution to team science projects? What are the potential risks of data breaches or violations of privacy when science teams use virtual collaboration tools, and how can these risks be mitigated? |
| Adapting to changing configurations |
What critical events determine when the use of technological tools needs to be revisited (e.g., adding new team members, changes in geographic distribution)? |
TABLE 3-2 Summary of Best Practices for Science Teams
| Best Practice | |
| Development | |
| Team assembly |
|
| Team onboarding and building |
|
| Developing a shared language |
|
| Conceptualization | |
| Team charters |
|
| Team planning and project design |
|
| Implementation | |
| Project management |
|
| Team debriefs |
|
| Translation | |
| Connecting with community members |
|
| Cross-Cutting | |
| Technology |
|
| Team leadership |
|
Conclusion 3-1: The following strategies offer the potential to improve science team performance and outcomes, if adapted to specific contexts and circumstances.
Recommendation 3-1: Research funders, including the National Science Foundation, the National Institutes of Health, and the many other agencies and foundations that support research, should provide resources enabling the study of team development, conceptualization, implementation, and translation.
Aaron, J. R., McDowell, W. C., & Herdman, A. O. (2014). The effects of a team charter on student team behaviors. Journal of Education for Business, 89(2), 90–97. https://doi.org/10.1080/08832323.2013.763753
Abay, N. C., Zhou, Y., Kantarcioglu, M., Thuraisingham, B., & Sweeney, L. (2019). Privacy preserving synthetic data release using deep learning. In Machine learning and knowledge discovery in databases: European Conference, ECML PKDD 2018, Dublin, Ireland, September 10–14, 2018, Proceedings, Part I 18 (pp. 510–526). Springer International Publishing.
Abrantes, A. C., Passos, A. M., Cunha, M. P. E., & Santos, C. M. (2022). Getting the knack for team-improvised adaptation: The role of reflexivity and team mental model similarity. The Journal of Applied Behavioral Science, 58(2), 281–315. https://doi.org/10.1177/00218863211009344
Adler, P. S., & Chen, C. X. (2011). Combining creativity and control: Understanding individual motivation in large-scale collaborative creativity. Accounting, Organizations and Society, 36(2), 63–85. https://doi.org/10.1016/j.aos.2011.02.002
Afful, H. Q., & Meiksin, J. (2022). US National Science Foundation opens technology directorate. MRS Bulletin, 47(11), 1074–1076. https://doi.org/10.1557/s43577-022-00450-y
Aksnes, D. W., Piro, F. N., & Rørstad, K. (2019). Gender gaps in international research collaboration: A bibliometric approach. Scientometrics, 120, 747–774.
Albaroudi, E., Mansouri, T., & Alameer, A. (2024). A comprehensive review of AI techniques for addressing algorithmic bias in job hiring. AI, 5(1), 383–404.
Allen, J. A., & Rogelberg, S. G. (2013). Manager-led group meetings: A context for promoting employee engagement. Group & Organization Management, 38(5), 543–569. https://doi.org/10.1177/1059601113503040
Allen, J. A., Reiter-Palmon, R., Crowe, J., & Scott, C. (2018). Debriefs: Teams learning from doing in context. American Psychologist, 73(4), 504–516. https://doi.org/10.1037/amp0000246
Allen, J. A., Sands, S. J., Mueller, S. L., Frear, K. A., Mudd, M., & Rogelberg, S. G. (2012). Employees’ feelings about more meetings: An overt analysis and recommendations for improving meetings. Psychology Faculty Publications, 95. https://digitalcommons.unomaha.edu/psychfacpub/95
Alshawabkeh, A. A., Woolsey, M. L., & Kharbat, F. F. (2021). Using online information technology for deaf students during COVID-19: A closer look from experience. Heliyon, 7(5), e06915. https://doi.org/10.1016/j.heliyon.2021.e06915
Americans with Disabilities Act. (2010). 2010 ADA Standards for Accessible Design. https://www.ada.gov/law-and-regs/design-standards/2010-stds/
Ang, J. R. X., Liu, P., McDonnell, E., & Coppola, S. (2022). “In this online environment, we’re limited”: Exploring inclusive video conferencing design for signers. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA. https://doi.org/10.1145/3491102.3517488
Arslan, C., Chang, E. H., Chilazi, S., Bohnet, I., & Hauser, O. P. (2025). Behaviorally designed training leads to more diverse hiring. Science, 387(6732), 364–366. https://doi.org/10.1126/science.ads5258
Arthur Jr, W., Edwards, B. D., Bell, S. T., Villado, A. J., & Bennett Jr, W. (2005). Team task analysis: Identifying tasks and jobs that are team based. Human Factors, 47(3), 654–669. https://doi.org/10.1518/001872005774860087
Asencio, R., Carter, D. R., DeChurch, L. A., Zaccaro, S. J., & Fiore, S. M. (2012). Charting a course for collaboration: A multiteam perspective. Translational Behavioral Medicine, 2(4), 487–494. https://doi.org/10.1007/s13142-012-0170-3
Audi, R. (2010). Epistemology: A contemporary introduction to the theory of knowledge (3rd ed.). Routledge.
Bacchini, F., & Lorusso, L. (2019). Race, again: How face recognition technology reinforces racial discrimination. Journal of Information, Communication and Ethics in Society, 17(3), 321–335.
Bammer, G. (2013). Disciplining interdisciplinarity: Integration and implementation sciences for researching complex real-world problems. ANU Press.
Barrick, M. R., Stewart, G. L., Neubert, M. J., & Mount, M. K. (1998). Relating member ability and personality to work-team processes and team effectiveness. Journal of Applied Psychology, 83(3), 377–391. https://doi.org/10.1037/0021-9010.83.3.377
Bass, B. M. (1999). Two decades of research and development in transformational leadership. European Journal of Work and Organizational Psychology, 8(1), 9–32. https://doi.org/10.1080/135943299398410
Beechey, K. (2024). How can researchers share sensitive data openly? Gates Open Research. https://blog.gatesopenresearch.org/2024/02/22/how-can-researchers-share-sensitive-data-openly/
Begerowski, S. R., Traylor, A. M., Shuffler, M. L., & Salas, E. (2021). An integrative review and practical guide to team development interventions for translational science teams: One size does not fit all. Journal of Clinical and Translational Science, 5(1), e198. https://doi.org/10.1017/cts.2021.832
Behar-Horenstein, L. S., & Prikhidko, A. (2017). Exploring mentoring in the context of team science. Mentor Tutoring, 25(4), 430–454. https://doi.org/10.1080/13611267.2017.1403579
Bell, S. T. (2007). Deep-level composition variables as predictors of team performance: A meta-analysis. Journal of Applied Psychology, 92(3), 595–615.
Bell, S. T., Brown, S. G., Colaneri, A., & Outland, N. (2018). Team composition and the ABCs of teamwork. American Psychologist, 73(4), 349–362.
Bell, S. T., Villado, A. J., Lukasik, M. A., Belau, L., & Briggs, A. L. (2011). Getting specific about demographic diversity variable and team performance relationships: A meta-analysis. Journal of Management, 37(3), 709–743. https://doi.org/10.1177/0149206310365001
Bennett, L. M., & Gadlin, H. (2012). Collaboration and team science: From theory to practice. Journal of Investigative Medicine, 60(5), 768–775. https://doi.org/10.2310/JIM.0b013e318250871d
Bennett, L. M., Gadlin, H., & Marchand, C. (2018). Collaboration team science: Field guide. U.S. Department of Health & Human Services and National Institutes of Health, National Cancer Institute.
Bennett, L. M., Maraia, R., & Gadlin, H. (2014). The ‘welcome letter’: A useful tool for laboratories and teams. Journal of Translational Medicine & Epidemiology, 2(2), 1035.
Bens, I. (2012). Facilitating with ease! Core skills for facilitators, team leaders and members, managers, consultants, and trainers (3rd ed). Jossey-Bass.
Berengüí, R., Carralero, R., Castejón, M. A., Campos-Salinas, J. A., & Cantón, E. (2021). Values, motivational orientation and team cohesion amongst youth soccer players. International Journal of Sports Science & Coaching, 17(5), 1049–1058. https://doi.org/10.1177/17479541211055690
Berntzen, M., Stray, V., & Moe, N. B. (2021). Coordination strategies: Managing inter-team coordination challenges in large-scale agile. Agile Processes in Software Engineering and Extreme Programming.
Bietz, M. J., Abrams, S., Cooper, D. M., Stevens, K. R., Puga, F., Patel, D. I., Olson, G. M., & Olson, J. S. (2012). Improving the odds through the Collaboration Success Wizard. Translational Behavioral Medicine, 2(4), 480–486. https://doi.org/10.1007/s13142-012-0174-z
Bisbey, T., Traylor, A., & Salas, E. (2021a). Transforming teams of experts into expert teams: Eight principles of expert team performance. Journal of Expertise, 4(2), 190–207.
Bisbey, T. M., Wooten, K. C., Salazar Campo, M., Lant, T. K., & Salas, E. (2021b). Implementing an evidence-based competency model for science team training and evaluation: TeamMAPPS. Journal of Clinical and Translational Science, 5(1), e142. https://doi.org/10.1017/cts.2021.795
Boix Mansilla, V. (2006). Assessing expert interdisciplinary work at the frontier: An empirical exploration. Research Evaluation, 15(1), 17–29. https://doi.org/10.3152/147154406781776075
Boix Mansilla, V. (2017). Interdisciplinary learning: A cognitive-epistemological foundation. In R. Frodeman (Ed.), The Oxford handbook of interdisciplinarity (2nd ed., pp. 261–275). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780198733522.013.22
Braithwaite, J., Churruca, K., Long, J. C., Ellis, L. A., & Herkes, J. (2018). When complexity science meets implementation science: A theoretical and empirical analysis of systems change. BMC Medicine, 16, 1–14.
Brasier, A. R., Burnside, E. S., & Rolland, B. (2023a). Competencies supporting high-performance translational teams: A review of the SciTS evidence base. Journal of Clinical and Translational Science, 7(1), e62. https://doi.org/10.1017/cts.2023.17
Brasier, A. R., Casey, S. L., Hatfield, P., Kelly, P. W., Sweeney, W. A., Schweizer, M., Liu, B., & Burnside, E. S. (2023b). A leadership model supporting maturation of high-performance translational teams. Journal of Clinical and Translational Science, 7(1), e171. https://doi.org/10.1017/cts.2023.598
Braun, F., & Avital, M. (2006). The role of accountability in motivating knowledge sharing among team members in information technology projects. In Americas Conference on Information Systems 2006 proceedings (p. 454). AIS Electronic Library.
Brister, E. (2016). Disciplinary capture and epistemological obstacles to interdisciplinary research: Lessons from central African conservation disputes. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 56, 82–91. https://doi.org/10.1016/j.shpsc.2015.11.001
Brownell, S. E., & Tanner, K. D. (2017). Barriers to faculty pedagogical change: Lack of training, time, incentives, and…tensions with professional identity? CBE—Life Sciences Education, 11(4), 339–346. https://doi.org/10.1187/cbe.12-09-0163
Brucks, M. S., & Levav, J. (2022). Virtual communication curbs creative idea generation. Nature, 605(7908), 108–112. https://doi.org/10.1038/s41586-022-04643-y
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research. https://proceedings.mlr.press/v81/buolamwini18a.html
Byrd, J. T., & Luthy, M. R. (2010). Improving group dynamics: Creating a team charter. Academy of Educational Leadership Journal, 14(1), 13–26.
Cady, S. H., Michelle, B., & Parker, N. (2018). When a team is more like a group: Improving individual motivation by managing integrity through team action processes. Public Integrity, 21(1), 86–103. https://doi.org/10.1080/10999922.2017.1419052
Calabro, S. M. (2018). From the message board to the front door: Addressing the offline consequences of race- and gender-based doxxing and swatting. Suffolk University Law Review, 55.
Cannon-Bowers, J. A., & Salas, E. (1998). Individual and team decision making under stress: Theoretical underpinnings. In J. A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individual and team training (pp. 17–38). American Psychological Association. https://doi.org/10.1037/10278-001
Cannon-Bowers, J. A., Tannenbaum, S. I., Salas, E., & Volpe, C. E. (1995). Defining competencies and establishing team training requirements. In R. Guzzo, E. Salas, & Associates (Eds.), Team effectiveness and decision making in organizations (pp. 333–380). Jossey-Bass.
Carter, D. R., Asencio, R., Trainer, H. M., DeChurch, L. A., Kanfer, R., & Zaccaro, S. J. (2019). Best practices for researchers working in multiteam systems. In K. Hall, A. Vogel, & R. Croyle (Eds.), Strategies for team science success: Handbook of evidence-based principles for cross-disciplinary science and practical lessons learned from health researchers (pp. 391–400). https://doi.org/10.1007/978-3-030-20992-6_29
Carter, N. T., Carter, D. R., & DeChurch, L. A. (2018). Implications of observability for the theory and measurement of emergent team phenomena. Journal of Management, 44(4), 1398–1425. https://doi.org/10.1177/0149206315609402
Carton, A. M. (2022). The science of leadership: A theoretical model and research agenda. Annual Review of Organizational Psychology and Organizational Behavior, 9, 61–93.
Cason, M., Sessions, L. C., Nemeth, L., Catchpole, K., & Bundy, D. G. (2020). Components of team science—What contributes to success? Journal of Interprofessional Education & Practice, 18, 100298. https://doi.org/10.1016/j.xjep.2019.100298
Castillo, F., Pearson, Y., Frizell, S., Skaggs, S., & Bisbey, T. (2024). Understanding organizational cultural influences in multisector multi-team systems. Paper presented at 2024 ASEE Annual Conference & Exposition, Portland, Oregon. https://doi.org/10.18260/1-2--48193
Chen, C., Xing, Z., & Liu, Y. (2017). By the community & for the community: A deep learning approach to assist collaborative editing in Q&A sites. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW), 1–21.
Chen, J., Bamberger, P. A., Song, Y., & Vashdi, D. R. (2018). The effects of team reflexivity on psychological well-being in manufacturing teams. Journal of Applied Psychology, 103(4), 443–462.
Colbert, A., Yee, N., & George, G. (2016). The digital workforce and the workplace of the future. Academy of Management Journal, 59(3), 731–739. https://doi.org/10.5465/amj.2016.4003
Collier, L. (2014). Defending animal research. Monitor on Psychology, 45(7), 40. https://www.apa.org/monitor/2014/07-08/defending-research
Corbacho, A. M., Minini, L., Pereyra, M., González-Fernández, A. E., Echániz, R., Repetto, L., Cruz, P., Fernández-Damonte, V., Lorieto, A., & Basile, M. (2021). Interdisciplinary higher education with a focus on academic motivation and teamwork diversity. International Journal of Educational Research Open, 2, Article 100062. https://doi.org/10.1016/j.ijedro.2021.100062
Courtright, S. H., McCormick, B. W., Mistry, S., & Wang, J. (2017). Quality charters or quality members? A control theory perspective on team charters and team performance. Journal of Applied Psychology, 102(10), 1462–1470. https://doi.org/10.1037/apl0000229
Cravens, A. E., Jones, M. S., Ngai, C., Zarestky, J., & Love, H. B. (2022). Science facilitation: Navigating the intersection of intellectual and interpersonal expertise in scientific collaboration. Humanities and Social Sciences Communications, 9, Article 256. https://doi.org/10.1057/s41599-022-01217-1
Dastin, J. (2018). Insight — Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/
De Jong, B. A., Dirks, K. T., & Gillespie, N. (2016). Trust and team performance: A meta-analysis of main effects, moderators, and covariates. Journal of Applied Psychology, 101(8), 1134–1150. https://doi.org/10.1037/apl0000110
Dening, L., Xiaoyan, L., & Shuyi, W. (2022). A literature review of diffusion of responsibility phenomenon. Proceedings of the 2022 8th International Conference on Humanities and Social Science Research.
Department of the Army. (1993). A leader's guide to after-action reviews (Training Circular No. 25–20).
DeRue, D. S., Nahrgang, J. D., Hollenbeck, J. R., & Workman, K. (2012). A quasi-experimental study of after-event reviews and leadership development. Journal of Applied Psychology, 97, 997–1015. http://dx.doi.org/10.1037/a0028244
Doush, I. A., Al-Jarrah, A., Alajarmeh, N., & Alnfiai, M. (2023). Learning features and accessibility limitations of video conferencing applications: Are people with visual impairment left behind? Universal Access in the Information Society, 22(4), 1353–1368. https://doi.org/10.1007/s10209-022-00917-4
Duff, J. P., Morse, K. J., Seelandt, J., Gross, I. T., Lydston, M., Sargeant, J., Dieckmann, P., Allen, J. A., Rudolph, J. W., & Kolbe, M. (2024). Debriefing methods for simulation in healthcare: A systematic review. Simulation in Healthcare, 19(1S), S112–S121.
Eddy, E. R., Tannenbaum, S. I., & Mathieu, J. E. (2013). Helping teams to help themselves: Comparing two team-led debriefing methods. Personnel Psychology, 66(4), 975–1008. https://doi.org/10.1111/peps.12041
Edmondson, A. C., & Lei, Z. (2014). Psychological safety: The history, renaissance, and future of an interpersonal construct. Annual Review of Organizational Psychology and Organizational Behavior, 1, 23–43. https://doi.org/10.1146/annurev-orgpsych-031413-091305
Egeland, T., & Schei, V. (2015). “Cut me some slack”: The psychological contracts as a foundation for understanding team charters. The Journal of Applied Behavioral Science, 51(4), 451–478. https://doi.org/10.1177/0021886314566075
Egeland, T., Schei, V., & Tjølsen, Ø. A. (2017). Expecting the unexpected: Using team charters to handle disruptions and facilitate team performance. Group Dynamics: Theory, Research, and Practice, 21(1), 53–59. http://dx.doi.org/10.1037/gdn0000059
Falcone, M., Loughead, J., & Lerman, C. (2019). The integration of research from diverse fields: Transdisciplinary approaches bridging behavioral research, cognitive neuroscience, pharmacology, and genetics to reduce cancer risk behavior. In K. L. Hall, A. L. Vogel, & R. T. Croyle (Eds.), Strategies for team science success: Handbook of evidence-based principles for cross-disciplinary science and practical lessons learned from health researchers (pp. 69–80). Springer, Cham.
Faraj, S., & Sproull, L. (2000). Coordinating expertise in software development teams. Management Science, 46(12), 1554–1568. https://doi.org/10.1287/mnsc.46.12.1554.12072
Fiore, S. M., & Schooler, J. W. (2004). Process mapping and shared cognition: Teamwork and the development of shared problem models. In E. Salas & S. M. Fiore (Eds.), Team cognition: Understanding the factors that drive process and performance (pp. 133–152). American Psychological Association. https://doi.org/10.1037/10690-007
Fiore, S. M., & Wiltshire, T. J. (2016). Technology as teammate: Examining the role of external cognition in support of team cognitive processes. Frontiers in Psychology, 7, 1531.
Fiore, S. M., Carter, D. R., & Asencio, R. (2015). Conflict, trust, and cohesion: Examining affective and attitudinal factors in science teams. In E. Salas, A. X. Estrada, & W. B. Vessey (Eds.), Team cohesion: Advances in psychological theory, methods and practice (pp. 271–301). Emerald Group Publishing Limited.
Fiore, S. M., Gabelica, C., Wiltshire, T. J., & Stokols, D. (2019). Training to be a (team) scientist. In K. L. Hall, A. L. Vogel, & R. T. Croyle (Eds.), Strategies for team science success: Handbook of evidence-based principles for cross-disciplinary science and practical lessons learned from health researchers (pp. 421–444). Springer, Cham. https://doi.org/10.1007/978-3-030-20992-6_33
Fiore, S. M., Rosen, M. A., Smith-Jentsch, K. A., Salas, E., Letsky, M., & Warner, N. (2010). Toward an understanding of macrocognition in teams: Predicting processes in complex collaborative contexts. Human Factors, 52(2), 203–224.
Forscher, P. S., Wagenmakers, E.-J., Coles, N. A., Silan, M. A., Dutra, N., Basnight-Brown, D., & IJzerman, H. (2023). The benefits, barriers, and risks of big-team science. Perspectives on Psychological Science, 18(3), 607–623. https://doi.org/10.1177/17456916221082970
Fox, C. W., Ritchey, J. P., & Paine, C. T. (2018). Patterns of authorship in ecology and evolution: First, last, and corresponding authorship vary with gender and geography. Ecology and Evolution, 8(23), 11492–11507.
Frazier, M. L., Fainshmidt, S., Klinger, R. L., Pezeshkan, A., & Vracheva, V. (2017). Psychological safety: A meta-analytic review and extension. Personnel Psychology, 70(1), 113–165. https://doi.org/10.1111/peps.12183
Gaffney, S. G., Ad, O., Smaga, S., Schepartz, A., & Townsend, J. P. (2019). GEM-NET: Lessons in multi-institution teamwork using collaboration software. ACS Central Science, 5(7), 1159–1169. https://doi.org/10.1021/acscentsci.9b00111
Gehlert, S., Carothers, B. J., Lee, J. A., Gill, J., Luke, D., & Colditz, G. (2015). A social network analysis approach to diagnosing and improving the functioning of transdisciplinary teams in public health. Transdisciplinary Journal of Engineering & Science, 6, 11–22. https://doi.org/10.22545/2015/00070
Gibson, C. B., Dunlop, P. D., Majchrzak, A., & Chia, T. (2022). Sustaining effectiveness in global teams: The coevolution of knowledge management activities and technology affordances. Organization Science, 33(3), 1018–1048. https://doi.org/10.1287/orsc.2021.1478
Gibson, C. B., Huang, L., Kirkman, B. L., & Shapiro, D. L. (2014). Where global and virtual meet: The value of examining the intersection of these elements in twenty-first-century teams. Annual Review of Organizational Psychology and Organizational Behavior, 1, 217–244. https://doi.org/10.1146/annurev-orgpsych-031413-091240
Ginns, P., Prosser, M., & Barrie, S. (2007). Students’ perceptions of teaching quality in higher education: The perspective of currently enrolled students. Studies in Higher Education, 32(5), 603–615. https://doi.org/10.1080/03075070701573773
Glasgow, R. E., & Emmons, K. M. (2007). How can we increase translation of research into practice? Types of evidence needed. Annual Review of Public Health, 28(1), 413–433.
Goldshtein, M., Chiou, E. K., & Roscoe, R. D. (2024). ‘I just don’t trust them’: Reasons for distrust and non-disclosure in demographic questionnaires for individuals in STEM. Societies, 14(7), 105. https://doi.org/10.3390/soc14070105
Gosselin, D. C., Thompson, K., Pennington, D., & Vincent, S. (2020). Learning to be an interdisciplinary researcher: Incorporating training about dispositional and epistemological differences into graduate student environmental science teams. Journal of Environmental Studies and Sciences, 10, 310–326. https://doi.org/10.1007/s13412-020-00605-w
Graef, D. J., Kramer, J. G., & Motzer, N. (2021a). Facilitating interdisciplinary meetings: A practical guide. National Socio-Environmental Synthesis Center. https://www.sesync.org/sites/default/files/2022-01/Facilitating%20Interdisciplinary%20Meetings%20Guide.pdf
Graef, D. J., Motzer, N., & Kramer, J. G. (2021b). The value of facilitation in interdisciplinary socio-environmental team research. Socio-Ecological Practice Research, 3(2), 109–113. https://doi.org/10.1007/s42532-021-00082-7
Graff, D., & Clark, M. A. (2018). Clear as a bell: The influence of analogies on the development of cross-understanding in design teams. Team Performance Management, 24(7/8), 396–410. https://doi.org/10.1108/TPM-04-2018-0028
Griffin, A. S., & Guez, D. (2014). Innovation and problem solving: A review of common mechanisms. Behavioural Processes, 109, 121–134. https://doi.org/10.1016/j.beproc.2014.08.027
Grindstaff, K., & Mascarenhas, M. (2019). “No one wants to believe it”: Manifestations of white privilege in a STEM-focused college. Multicultural Perspectives, 21(2), 102–111. https://doi.org/10.1080/15210960.2019.1572487
Guise, J. M., Nagel, J. D., & Regensteiner, J. G. (2012). Best practices and pearls in interdisciplinary mentoring from Building Interdisciplinary Research Careers in Women’s Health Directors. Journal of Women’s Health, 21(11), 1114–1127. https://doi.org/10.1089/jwh.2012.3788
Guise, J. M., Winter, S., Fiore, S. M., Regensteiner, J. G., & Nagel, J. (2017). Organizational and training factors that promote team science: A qualitative analysis and application of theory to the NIH’s BIRCWH career development program. Journal of Clinical and Translational Science, 1(2), 101–107. https://doi.org/10.1017/cts.2016.17
Hackman, J. R. (2012). From causes to conditions in group research. Journal of Organizational Behavior, 33(3), 428–444. https://doi.org/10.1002/job.1774
Hackman, J. R., & Wageman, R. (2005). A theory of team coaching. The Academy of Management Review, 30(2), 269–287. https://doi.org/10.5465/amr.2005.16387885
Hall, K. L., Stokols, D., Moser, R. P., Taylor, B. K., Thornquist, M. D., Nebeling, L. C., Ehret, C. C., Barnett, M. J., McTiernan, A., Berger, N. A., Goran, M. I., & Jeffery, R. W. (2008). The collaboration readiness of transdisciplinary research teams and centers: Findings from the National Cancer Institute’s TREC year-one evaluation study. American Journal of Preventive Medicine, 35(2), S161–S172. https://doi.org/10.1016/j.amepre.2008.03.035
Hall, K. L., Vogel, A. L., & Crowston, K. (2019). Comprehensive collaboration plans: Practical considerations spanning across individual collaborators to institutional supports. In K. L. Hall, A. L. Vogel, & R. T. Croyle (Eds.), Strategies for team science success: Handbook of evidence-based principles for cross-disciplinary science and practical lessons learned from health researchers (pp. 587–612). Springer, Cham. https://doi.org/10.1007/978-3-030-20992-6
Hall, K. L., Vogel, A. L., Huang, G. C., Serrano, K. J., Rice, E. L., Tsakraklides, S. P., & Fiore, S. M. (2018). The science of team science: A review of the empirical evidence and research gaps on collaboration in science. American Psychologist, 73(4), 532–548. https://doi.org/10.1037/amp0000319
Hall, K. L., Vogel, A. L., Stipelman, B. A., Stokols, D., Morgan, G., & Gehlert, S. (2012). A four-phase model of transdisciplinary team-based research: Goals, team processes, and strategies. Translational Behavioral Medicine, 2(4), 415–430. https://doi.org/10.1007/s13142-012-0167-y
Handke, L., Aldana, A., Costa, P. L., & O’Neill, T. A. (2024). Hybrid teamwork: What we know and where we can go from here. Small Group Research, 55(5), 805–835. https://doi.org/10.1177/10464964241279078
Harvey, J.-F., Bresman, H., Edmondson, A. C., & Pisano, G. P. (2022). A strategic view of team learning in organizations. Academy of Management Annals, 16(2). https://doi.org/10.5465/annals.2020.0352
Hawk, L. W., Murphy, T. F., Hartmann, K. E., Burnett, A., & Maguin, E. (2024). A randomized controlled trial of a team science intervention to enhance collaboration readiness and behavior among early career scholars in the Clinical and Translational Science Award network. Journal of Clinical and Translational Science, 8(1), e6. https://doi.org/10.1017/cts.2023.692
Henson, V. R., Cobourn, K. M., Weathers, K. C., Carey, C. C., Farrell, K. J., Klug, J. L., Sorice, M. G., Ward, N. K., & Weng, W. (2020). A practical guide for managing interdisciplinary teams: Lessons learned from coupled natural and human systems research. Social Sciences, 9(7), 119. https://doi.org/10.3390/socsci9070119
Hersh, M., Leporini, B., & Buzzi, M. (2024). A comparative study of disabled people’s experiences with the video conferencing tools Zoom, MS Teams, Google Meet and Skype. Behaviour & Information Technology, 43(15), 1–20. https://doi.org/10.1080/0144929X.2023.2286533
Hoever, I. J., van Knippenberg, D., van Ginkel, W. P., & Barkema, H. G. (2012). Fostering team creativity: Perspective taking as key to unlocking diversity’s potential. Journal of Applied Psychology, 97(5), 982–996. https://doi.org/10.1037/a0029159
Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
Holcomb, L. B., King, F. B., & Brown, S. W. (2004). Student traits and attributes contributing to success in online courses: Evaluation of university online courses. The Journal of Interactive Online Learning, 2(3), 1–17.
Horwitz, S. K., & Horwitz, I. B. (2007). The effects of team diversity on team outcomes: A meta-analytic review of team demography. Journal of Management, 33(6), 987–1015. https://doi.org/10.1177/0149206307308587
Huang, J., Gates, A. J., Sinatra, R., & Barabási, A. L. (2020). Historical comparison of gender inequality in scientific careers across countries and disciplines. Proceedings of the National Academy of Sciences, 117(9), 4609–4616.
Hubbs, G., O’Rourke, M., & Orzack, S. H. (Eds.). (2020). The Toolbox Dialogue Initiative: The power of cross-disciplinary practice. CRC Press.
Huebschmann, A. G., Leavitt, I. M., & Glasgow, R. E. (2019). Making health research matter: A call to increase attention to external validity. Annual Review of Public Health, 40(1), 45–63.
Hutchins, E. (1999). Cognitive artifacts. In F. C. Keil (Ed.), MIT encyclopedia of the cognitive sciences. MIT Press.
Ingersoll, B., Espinel, A., Nauman, J., Broder-Fingert, S., Carter, A. S., Sheldrick, R. C., Stone, W. L., & Wainer, A. L. (2024). Using virtual multiteam systems to conduct a multisite randomized clinical trial in the part C early intervention system: Benefits, challenges, and lessons learned. Contemporary Clinical Trials, 143, 107585. https://doi.org/10.1016/j.cct.2024.107585
Jackson, G. (2022). The AI-enhanced future of health care administrative task management. NEJM Catalyst Innovations in Care Delivery.
Jadidi, M., Karimi, F., Lietz, H., & Wagner, C. (2018). Gender disparities in science? Dropout, productivity, collaborations and success of male and female computer scientists. Advances in Complex Systems, 21(03n04), 1750011. https://doi.org/10.1142/S0219525917500114
Jarrahi, M. H., Askay, D., Eshraghi, A., & Smith, P. (2023). Artificial intelligence and knowledge management: A partnership between human and AI. Business Horizons, 66(1), 87–99. https://doi.org/10.1016/j.bushor.2022.03.002
Jehn, K. A. (1997). A qualitative analysis of conflict types and dimensions in organizational groups. Administrative Science Quarterly, 42(3), 530–557. https://doi.org/10.2307/2393737
Jeske, D., & Olson, D. (2024). Silo mentality in teams: Emergence, repercussions and recommended options for change. Journal of Work-Applied Management.
Johnson, M., Schuster, M., Le, Q. V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Viégas, F., Wattenberg, M., Corrado, G., Hughes, M., & Dean, J. (2017). Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5, 339–351. https://doi.org/10.1162/tacl_a_00065
Johnson, W. H., Baker, D. S., Dong, L., Taras, V., & Wankel, C. (2022). Do team charters help team-based projects? The effects of team charters on performance and satisfaction in global virtual teams. Academy of Management Learning & Education, 21(2), 236–260. https://doi.org/10.5465/amle.2020.0332
Jones, B. D. (2009). Motivating students to engage in learning: The music model of academic motivation. International Journal of Teaching and Learning in Higher Education, 21(2), 272–285.
Joshi, A., & Roh, H. (2009). The role of context in work team diversity research: A meta-analytic review. Academy of Management Journal, 52(3), 599–627. https://doi.org/10.5465/amj.2009.41331491
Kaner, S. (2014). Facilitator’s guide to participatory decision-making (3rd ed.). Jossey-Bass.
Kasi, V., Keil, M., Mathiassen, L., & Pedersen, K. (2008). The post mortem paradox: A Delphi study of IT specialist perceptions. European Journal of Information Systems, 17(1), 62–78. https://doi.org/10.1057/palgrave.ejis.3000727
Keiser, N. L., & Arthur, W., Jr. (2021). A meta-analysis of the effectiveness of the after-action review (or debrief) and factors that influence its effectiveness. Journal of Applied Psychology, 106(7), 1007–1032. https://doi.org/10.1037/apl0000821
___. (2022). A meta-analysis of task and training characteristics that contribute to or attenuate the effectiveness of the after-action review (or debrief). Journal of Business and Psychology, 37(5), 953–976. https://doi.org/10.1007/s10869-021-09784-x
Kilcullen, M., Bisbey, T. M., Rosen, M., & Salas, E. (2023). Does team orientation matter? A state-of-the-science review, meta-analysis, and multilevel framework. Journal of Organizational Behavior, 44(2), 355–375. https://doi.org/10.1002/job.2622
Kilcullen, M., Feitosa, J., & Salas, E. (2022). Insights from the virtual team science: Rapid deployment during COVID-19. Human Factors, 64(8), 1429–1440. https://doi.org/10.1177/0018720821991678
Kirkman, B. L., & Mathieu, J. E. (2005). The dimensions and antecedents of team virtuality. Journal of Management, 31(5), 700–718. https://doi.org/10.1177/0149206305279113
Kirkman, B. L., & Stoverink, A. C. (2021). Building resilient virtual teams. Organizational Dynamics, 50(1), 100825. https://doi.org/10.1016/j.orgdyn.2020.100825
Kirkman, B. L., Rosen, B., Gibson, C. B., Tesluk, P. E., & McPherson, S. O. (2002). Five challenges to virtual team success: Lessons from Sabre, Inc. Academy of Management Perspectives, 16(3), 67–79. https://doi.org/10.5465/ame.2002.8540322
Klein, C., DiazGranados, D., Salas, E., Le, H., Burke, C. S., Lyons, R., & Goodwin, G. F. (2009). Does team building work? Small Group Research, 40(2), 181–222. https://doi.org/10.1177/1046496408328821
Knott, E., Rao, A. H., Summers, K., & Teeger, C. (2022). Interviews in the social sciences. Nature Reviews Methods Primers, 2, Article 73. https://doi.org/10.1038/s43586-022-00150-6
Kolbe, M., Eppich, W., Rudolph, J., Meguerdichian, M., Catena, H., Cripps, A., Grant, V., & Cheng, A. (2020). Managing psychological safety in debriefings: A dynamic balancing act. BMJ Simulation and Technology Enhanced Learning, 6(3), 164–171. https://doi.org/10.1136/bmjstel-2019-000470
Kotarba, J. A., Molldrem, S., Smith, E., Spratt, H., Bhavnani, S. K., Farroni, J. S., & Wooten, K. (2023). Exploring team dynamics during the development of a multi-institutional cross-disciplinary translational team: Implications for potential best practices. Journal of Clinical and Translational Science, 7(1), e220. https://doi.org/10.1017/cts.2023.640
Kozlowski, S. W. J., & Chao, G. T. (2018). Unpacking team process dynamics and emergent phenomena: Challenges, conceptual advances, and innovative methods. American Psychologist, 73(4), 576–592. https://doi.org/10.1037/amp0000245
Kramer, T. J., Fleming, G. P., & Mannis, S. M. (2001). Improving face-to-face brainstorming through modeling and facilitation. Small Group Research, 32(5), 533–557. https://doi.org/10.1177/104649640103200502
Kwiek, M., & Roszka, W. (2021). Gender disparities in international research collaboration: A study of 25,000 university professors. Journal of Economic Surveys, 35(5), 1344–1380.
___. (2022). Are female scientists less inclined to publish alone? The gender solo research gap. Scientometrics, 127(4), 1697–1735.
Lacerenza, C. N., Marlow, S. L., Tannenbaum, S. I., & Salas, E. (2018). Team development interventions: Evidence-based approaches for improving teamwork. American Psychologist, 73(4), 517–531. https://doi.org/10.1037/amp0000295
Larson, L., & DeChurch, L. A. (2020). Leading teams in the digital age: Four perspectives on technology and what they mean for leading teams. The Leadership Quarterly, 31(1), 101377. https://doi.org/10.1016/j.leaqua.2019.101377
Lau, D. C., & Murnighan, J. K. (1998). Demographic diversity and faultlines: The compositional dynamics of organizational groups. The Academy of Management Review, 23(2), 325–340. https://doi.org/10.5465/amr.1998.533229
Law, S. K. (2023). Barriers experienced by individuals with IDD when interacting with digital technology and online content. OCAD University. https://openresearch.ocadu.ca/id/eprint/4126/1/Law_Sandra_2023_MDes_INCD_MRP.pdf
Lee, N. T., Resnick, P., & Barton, G. (2019). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings Institution. https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/
Leporini, B., Buzzi, M., & Hersh, M. (2023). Video conferencing tools: Comparative study of the experiences of screen reader users and the development of more inclusive design guidelines. ACM Transactions on Accessible Computing, 16(1), 1–36. https://doi.org/10.1145/3573012
Levine, J. M., Choi, H.-S., & Moreland, R. L. (2003). Newcomer innovation in work teams. In P. Paulus & B. Nijstad (Eds.), Group creativity: Innovation through collaboration (pp. 202–224). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195147308.003.0010
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W-t., Rocktäschel, T., Riedel, S., & Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33, 9459–9474. https://proceedings.neurips.cc/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf
Liao, W. M., Finley, D. R., & Shafer, W. E. (2004). Effects of responsibility and cohesiveness on group escalation decisions. Advances in Management Accounting, 13, 245–259.
Liu, F., Holme, P., Chiesa, M., AlShebli, B., & Rahwan, T. (2023a). Gender inequality and self-publication are common among academic editors. Nature Human Behaviour, 7(3), 353–364.
Liu, M., Jaiswal, A., Bu, Y., Min, C., Yang, S., Liu, Z., Acuña, D., & Ding, Y. (2022). Team formation and team impact: The balance between team freshness and repeat collaboration. Journal of Informetrics, 16(4), 101337. https://doi.org/10.1016/j.joi.2022.101337
Liu, Y., Song, Y., Trainer, H., Carter, D., Zhou, L., Wang, Z., & Chiang, J. T. (2023b). Feeling negative or positive about fresh blood? Understanding veterans’ affective reactions toward newcomer entry in teams from an affective events perspective. Journal of Applied Psychology, 108(5), 728–749. https://doi.org/10.1037/apl0001044
Love, H. B., Cross, J. E., Fosdick, B., Crooks, K. R., VandeWoude, S., & Fisher, E. R. (2021). Interpersonal relationships drive successful team science: An exemplary case-based study. Humanities and Social Sciences Communications, 8, 106. https://doi.org/10.1057/s41599-021-00789-8
Love, H. B., Fosdick, B. K., Cross, J. E., Suter, M., Egan, D., Tofany, E., & Fisher, E. R. (2022). Towards understanding the characteristics of successful and unsuccessful collaborations: A case-based team science study. Humanities and Social Sciences Communications, 9, Article 371. https://doi.org/10.1057/s41599-022-01388-x
Lungeanu, A., Carter, D. R., DeChurch, L. A., & Contractor, N. S. (2018). How team interlock ecosystems shape the assembly of scientific teams: A hypergraph approach. Computational Methods for Communication Science, 12(2-3), 174–198. https://doi.org/10.1080/19312458.2018.1430756
Lungeanu, A., Huang, Y., & Contractor, N. S. (2014). Understanding the assembly of interdisciplinary teams and its impact on performance. Journal of Informetrics, 8(1), 59–70. https://doi.org/10.1016/j.joi.2013.10.006
Mallinson, T., Lotrecchiano, G. R., Schwartz, L. S., Furniss, J., Leblanc-Beaudoin, T., Lazar, D., & Falk-Krzesinski, H. J. (2016). Pilot analysis of the Motivation Assessment for Team Readiness, Integration, and Collaboration (MATRICx) using Rasch analysis. Journal of Investigative Medicine, 64(7), 1186–1193. https://doi.org/10.1136/jim-2016-000173
Marchant, T. D., Tiernan, T. M., & Mann, W. C. (2005). Computer accessibility issues for older adults with disabilities: A pilot study. Occupational Therapy Journal of Research, 25(2), 55–65. https://doi.org/10.1177/153944920502500203
Marks, M. A., Mathieu, J. E., & Zaccaro, S. J. (2001). A temporally based framework and taxonomy of team processes. The Academy of Management Review, 26(3), 356–376. https://doi.org/10.2307/259182
Marlow, S. L., Lacerenza, C. N., Paoletti, J., Burke, C. S., & Salas, E. (2018). Does team communication represent a one-size-fits-all approach?: A meta-analysis of team communication and performance. Organizational Behavior and Human Decision Processes, 144, 145–170. https://doi.org/10.1016/j.obhdp.2017.08.001
Marrone, J. A. (2010). Team boundary spanning: A multilevel review of past research and proposals for the future. Journal of Management, 36(4), 911–940. https://doi.org/10.1177/0149206309353945
Marrone, J. A., Tesluk, P. E., & Carson, J. B. (2007). A multilevel investigation of antecedents and consequences of team member boundary-spanning behavior. Academy of Management Journal, 50(6), 1423–1439. https://doi.org/10.5465/amj.2007.28225967
Martínez-Córcoles, M., Teichmann, M., & Murdvee, M. (2017). Assessing technophobia and technophilia: Development and validation of a questionnaire. Technology in Society, 51, 183–188. https://doi.org/10.1016/j.techsoc.2017.09.007
Mason, O. J., Stevenson, C., & Freedman, F. (2014). Ever-present threats from information technology: The Cyber-Paranoia and Fear Scale. Frontiers in Psychology, 5, 1298. https://doi.org/10.3389/fpsyg.2014.01298
Mathieu, J. E., & Rapp, T. L. (2009). Laying the foundation for successful team performance trajectories: The roles of team charters and performance strategies. Journal of Applied Psychology, 94(1), 90–103. https://doi.org/10.1037/a0013257
Mathieu, J. E., Wolfson, M. A., & Park, S. (2018). The evolution of work team research since Hawthorne. American Psychologist, 73(4), 308–321.
McCormack, W. T., & Strekalova, Y. A. L. (2021). CTS teams: A new model for translational team training and team science intervention. Journal of Clinical and Translational Science, 5(1), e183. https://doi.org/10.1017/cts.2021.854
McDowell, W. C., Herdman, A. O., & Aaron, J. (2011). Charting the course: The effects of team charters on emergent behavioral norms. Organization Development Journal, 29(1), 79–88.
McMahan, B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017, April). Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics (pp. 1273–1282). PMLR.
Meluso, J., Johnson, S., & Bagrow, J. (2022). Flexible environments for hybrid collaboration: Redesigning virtual work through the four orders of design. Design Issues, 38(1), 55–69. https://doi.org/10.1162/desi_a_00670
Mercer, S. L., Green, L. W., Cargo, M., Potter, M. A., Daniel, M., Olds, S., & Reed-Gross, E. (2008). Appendix C: Reliability-tested guidelines for assessing participatory research projects. In M. Minkler & N. Wallerstein (Eds.), Community-based participatory research for health: From process to outcomes (2nd ed., pp. 407–433). Jossey-Bass.
Merritt, S. M., Heimbaugh, H., LaChapell, J., & Lee, D. (2013). I trust it, but I don’t know why: Effects of implicit attitudes toward automation on trust in an automated system. Human Factors, 55(3), 520–534. https://doi.org/10.1177/0018720812465081
Milliken, F. J., & Martins, L. L. (1996). Searching for common threads: Understanding the multiple effects of diversity in organizational groups. The Academy of Management Review, 21(2), 402–433. https://doi.org/10.5465/amr.1996.9605060217
Misra, S., Harvey, R. H., Stokols, D., Pine, K. H., Fuqua, J., Shokair, S. M., & Whiteley, J. M. (2009). Evaluating an interdisciplinary undergraduate training program in health promotion research. American Journal of Preventive Medicine, 36(4), 358–365. https://doi.org/10.1016/j.amepre.2008.11.014
Misra, S., Stokols, D., & Cheng, L. (2015). The Transdisciplinary Orientation scale: Factor structure and relation to the integrative quality and scope of scientific publications. Journal of Translational Medicine & Epidemiology, 3(2), 1042.
Montag, C., Kraus, J., Baumann, M., & Rozgonjuk, D. (2023). The propensity to trust in (automated) technology mediates the links between technology self-efficacy and fear and acceptance of artificial intelligence. Computers in Human Behavior Reports, 11, Article 100315. https://doi.org/10.1016/j.chbr.2023.100315
Montgomery, B. L., & Page, S. C. (2018). [Mentoring beyond hierarchies: Multi-mentor systems and models]. Paper commissioned by the Committee on Effective Mentoring in STEMM at the National Academies of Sciences, Engineering, and Medicine. https://nap.nationalacademies.org/resource/25568/Montgomery%20and%20Page%20-%20Mentoring.pdf
Montoya, A. C., Carter, D. R., Martin, J., & DeChurch, L. A. (2015). The five perils of team planning: Regularities and remedies. In M. D. Mumford & M. Frese (Eds.), The psychology of planning in organizations (pp. 166–185). Routledge.
Morgan, S. E., Mosser, A., Ahn, S., Harrison, T. R., Wang, J., Huang, Q., Reynolds, A., Mao, B., & Bixby, J. L. (2021). Developing and evaluating a team development intervention to support interdisciplinary teams. Journal of Clinical and Translational Science, 5(1), e166. https://doi.org/10.1017/cts.2021.831
Morgeson, F. P., DeRue, D. S., & Karam, E. P. (2010). Leadership in teams: A functional approach to understanding leadership structures and processes. Journal of Management, 36(1), 5–39.
Mosca, J. B., & Merkle, J. F. (2024). Strategic onboarding: Tailoring Gen Z transition for workplace success. Journal of Business Diversity, 24(1). https://doi.org/10.33423/jbd.v24i1.6852
Mroz, J. E., Allen, J. A., Verhoeven, D. C., & Shuffler, M. L. (2018). Do we really need another meeting? The science of workplace meetings. Current Directions in Psychological Science, 27(6), 484–491. https://doi.org/10.1177/0963721418776307
Nagel, J. D., Koch, A., Guimond, J. M., Galvin, S., & Geller, S. (2013). Building the women’s health research workforce: Fostering interdisciplinary research approaches in women’s health. Global Advances in Health and Medicine, 2(5), 24–29. https://doi.org/10.7453/gahmj.2013.049
National Academies of Sciences, Engineering, and Medicine. (2019). The science of effective mentorship in STEMM. The National Academies Press. https://doi.org/10.17226/25568
___. (2023). Attacks on scientists and health professionals during the pandemic: Proceedings of a symposium in brief. The National Academies Press. https://doi.org/10.17226/26936
National Research Council. (2011). Prudent practices in the laboratory: Handling and management of chemical hazards, updated version. The National Academies Press. https://doi.org/10.17226/12654
___. (2015). Enhancing the effectiveness of team science. The National Academies Press. https://doi.org/10.17226/19007
National Science Foundation. (2024). U.S. National Science Foundation research traineeship program. https://new.nsf.gov/funding/opportunities/us-national-science-foundation-research-traineeship-program
Newton, O. B., Fiore, S. M., & Song, J. (2019). Expertise and complexity as mediators of knowledge loss in open source software development. In Proceedings of the Human Factors and Ergonomics Society annual meeting (pp. 1580–1584). SAGE Publications.
Nielsen, M. W., Bloch, C. W., & Schiebinger, L. (2018). Making gender diversity work for scientific discovery and innovation. Nature Human Behaviour, 2(10), 726–734. https://doi.org/10.1038/s41562-018-0433-1
Nishii, L. H., & Leroy, H. (2022). A multi-level framework of inclusive leadership in organizations. Group & Organization Management, 47(4), 683–722. https://doi.org/10.1177/10596011221111505
Nogrady, B. (2021). ‘I hope you die’: How the COVID pandemic unleashed attacks on scientists. Nature, 598, 250–253. https://doi.org/10.1038/d41586-021-02741-x
O’Neill, T. A., Allen, N. J., & Hastings, S. E. (2013). Examining the “pros” and “cons” of team conflict: A team-level meta-analysis of task, relationship, and process conflict. Human Performance, 26(3), 236–260. https://doi.org/10.1080/08959285.2013.795573
Onoja, J. P., & Ajala, O. A. (2022). Innovative telecommunications strategies for bridging digital inequities: A framework for empowering underserved communities. GSC Advanced Research and Reviews, 13(1), 210–217.
O’Rourke, M., Crowley, S., Laursen, B., Robinson, B., & Vasko, S. E. (2019). Disciplinary diversity in teams: Integrative approaches from unidisciplinarity to transdisciplinarity. In K. Hall, A. Vogel, & R. Croyle (Eds.), Strategies for team science success. Springer, Cham. https://doi.org/10.1007/978-3-030-20992-6_2
O’Rourke, M., Rinkus, M. A., Cardenas, E., & McLeskey, C. (2023). Communication practice for team science. In D. Gosselin (Ed.), A practical guide for developing cross-disciplinary collaboration skills (pp. 83–102). Springer. https://doi.org/10.1007/978-3-031-37220-9_5
Paletz, S. B. F., Schunn, C. D., & Kim, K. H. (2013). The interplay of conflict and analogy in multidisciplinary teams. Cognition, 126(1), 1–19. https://doi.org/10.1016/j.cognition.2012.07.020
Park, G., Spitzmuller, M., & DeShon, R. P. (2013). Advancing our understanding of team motivation: Integrating conceptual approaches and content areas. Journal of Management, 39(5), 1339–1379. https://doi.org/10.1177/0149206312471389
Parker, P. (2020). The art of gathering: How we meet and why it matters. Riverhead Books.
Paterson, B., Gregory, D., & Thorne, S. (1999). A protocol for researcher safety. Qualitative Health Research, 9(2), 259–269.
Pendharkar, P. C., & Rodger, J. A. (2009). The relationship between software development team size and software development cost. Communications of the ACM, 52(1), 141–144. https://doi.org/10.1145/1435417.1435449
Pennington, D. (2016). A conceptual model for knowledge integration in interdisciplinary teams: Orchestrating individual learning and group processes. Journal of Environmental Studies and Sciences, 6, 300–312.
Pennington, D., Bammer, G., Danielson, A., Gosselin, D., Gouvea, J., Habron, G., Hawthorne, D., Parnell, R., Thompson, K., Vincent, S., & Wei, C. (2016). The EMBeRS project: Employing model-based reasoning in socio-environmental synthesis. Journal of Environmental Studies and Sciences, 6, 278–286.
Pennington, D., Vincent, S., Gosselin, D., & Thompson, K. (2021). Learning across disciplines in socio-environmental problem framing. Socio-Environmental Systems Modelling, 3, 17895.
Piso, Z., O’Rourke, M., & Weathers, K. C. (2016). Out of the fog: Catalyzing integrative capacity in interdisciplinary research. Studies in History and Philosophy of Science Part A, 56, 84–94. https://doi.org/10.1016/j.shpsa.2016.01.002
Purvanova, R. K., & Kenda, R. (2022). The impact of virtuality on team effectiveness in organizational and non-organizational teams: A meta-analysis. Applied Psychology, 71(3), 1082–1131. https://doi.org/10.1111/apps.12348
Qudrat-Ullah, H. (2007). Debriefing can reduce misperceptions of feedback: The case of renewable resource management. Simulation & Gaming, 38(3), 382–397. https://doi.org/10.1177/1046878107300669
Reich, Y., Ullmann, G., Van der Loos, M., & Leifer, L. (2009). Coaching product development teams: A conceptual foundation for empirical studies. Research in Engineering Design, 19(4), 205–222. https://doi.org/10.1007/s00163-008-0046-1
Reiter-Palmon, R., Kennel, V., Allen, J. A., Jones, K. J., & Skinner, A. M. (2015). Naturalistic decision making in after-action review meetings: The implementation of and learning from post-fall huddles. Journal of Occupational and Organizational Psychology, 88(2), 322–340. https://doi.org/10.1111/joop.12084
Roberson, Q., & Perry, J. L. (2021). Inclusive leadership in thought and action: A thematic analysis. Group & Organization Management, 47(4), 755–778. https://doi.org/10.1177/10596011211013161
Rodríguez, D. C., Jessani, N. S., Zunt, J., Ardila-Gómez, S., Muwanguzi, P. A., Atanga, S. N., Sunguya, B., Farquhar, C., & Nasuuna, E. (2021). Experiential learning and mentorship in global health leadership programs: Capturing lessons from across the globe. Annals of Global Health, 87(1), 61. https://doi.org/10.5334/aogh.3194
Rogelberg, S. G. (2019). The surprising science of meetings: How you can lead your team to peak performance. Oxford University Press.
Rogelberg, S. G., Leach, D. J., Warr, P. B., & Burnfield, J. L. (2006). “Not another meeting!” Are meeting time demands related to employee well-being? Journal of Applied Psychology, 91(1), 83–96. https://doi.org/10.1037/0021-9010.91.1.83
Rolland, B., Scholl, L., Suryanarayanan, S., Hatfield, P., Judge, K., Sorkness, C., Burnside, E., & Brasier, A. R. (2021a). Operationalization, implementation, and evaluation of collaboration planning: A pilot interventional study of nascent translational teams. Journal of Clinical and Translational Science, 5(1), e23. https://doi.org/10.1017/cts.2020.515
Rolland, B., Hohl, S. D., & Johnson, L. J. (2021b). Enhancing translational team effectiveness: The Wisconsin Interventions in Team Science framework for translating empirically informed strategies into evidence-based interventions. Journal of Clinical and Translational Science, 5(1), e158. https://doi.org/10.1017/cts.2021.825
Rosen, M. A., Salas, E., Wilson, K. A., King, H. B., Salisbury, M., Augenstein, J. S., Robinson, D. W., & Birnbach, D. J. (2008). Measuring team performance in simulation-based training: Adopting best practices for healthcare. Simulation in Healthcare, 3(1), 33–41. https://doi.org/10.1097/SIH.0b013e3181626276
Salas, E., Klein, C., King, H., Salisbury, M., Augenstein, J. S., Birnbach, D. J., Robinson, D. W., & Upshaw, C. (2008). Debriefing medical teams: 12 evidence-based best practices and tips. The Joint Commission Journal on Quality and Patient Safety, 34(9), 518–527. https://doi.org/10.1016/S1553-7250(08)34066-5
Santos, V. R., Soares, A. L., & Carvalho, J. Á. (2012). Knowledge sharing barriers in complex research and development projects: An exploratory study on the perceptions of project managers. Knowledge and Process Management, 19(1), 27–38.
Schaubroeck, J., Lam, S. S. K., & Cha, S. E. (2007). Embracing transformational leadership: Team values and the impact of leader behavior on team performance. Journal of Applied Psychology, 92(4), 1020–1030. https://doi.org/10.1037/0021-9010.92.4.1020
Schmidt, K., & Wagner, I. (2004). Ordering systems: Coordinative practices and artifacts in architectural design and planning. Computer Supported Cooperative Work, 13, 349–408.
Schweiger, D. M., Sandberg, W. R., & Rechner, P. L. (1989). Experiential effects of dialectical inquiry, devil’s advocacy and consensus: Approaches to strategic decision making. Academy of Management Journal, 32(4), 745–772. https://doi.org/10.5465/256567
Secară, A., & Perez, E. (2022). Addressing content, technical and collaboration concerns in providing access to the D/deaf and hard of hearing audience: Integrated theatre captioning and theatre sign language interpreting. InTRAlinea: Online Translation Journal, 24.
Shah, D. T., & Fiore, S. M. (2022). Building effective mentoring team using team science competencies. In A. Fornari & D. T. Shah (Eds.), Mentoring in health professions education: Evidence-informed strategies across the continuum (pp. 13–21). Springer, Cham. https://doi.org/10.1007/978-3-030-86935-9
Sherrill, A. M., Wiese, C. W., Abdullah, S., & Arriaga, R. I. (2022). Overcoming clinician technophobia: What we learned from our mass exposure to telehealth during the COVID-19 pandemic. Journal of Technology in Behavioral Science, 7(4), 547–553. https://doi.org/10.1007/s41347-022-00273-3
Shore, L. M., & Chung, B. G. (2021). Inclusive leadership: How leaders sustain or discourage work group inclusion. Group & Organization Management, 47(4), 723–754. https://doi.org/10.1177/1059601121999580
Shuffler, M. L., DiazGranados, D., Maynard, M. T., & Salas, E. (2018). Developing, sustaining, and maximizing team effectiveness: An integrative, dynamic perspective of team development interventions. Academy of Management Annals, 12(2), 688–724. https://doi.org/10.5465/annals.2016.0045
Shuffler, M. L., Jiménez-Rodríguez, M., & Kramer, W. S. (2015). The science of multiteam systems: A review and future research agenda. Small Group Research, 46(6), 659–699. https://doi.org/10.1177/1046496415603455
Sindermann, C., Sha, P., Zhou, M., Wernicke, J., Schmitt, H. S., Li, M., Sariyska, R., Stavrou, M., Becker, B., & Montag, C. (2021). Assessing the attitude towards artificial intelligence: Introduction of a short measure in German, Chinese, and English language. Künstliche Intelligenz, 35, 109–118. https://doi.org/10.1007/s13218-020-00689-0
Sinkovics, R. R., Stöttinger, B., Schlegelmilch, B. B., & Ram, S. (2002). Reluctance to use technology-related products: Development of a technophobia scale. Thunderbird International Business Review, 44(4), 477–494. https://doi.org/10.1002/tie.10033
Smith, A. M., Lai, S. Y., Bea-Taylor, J., Hill, R. B., & Kleinhenz, N. (2016). Collaboration and change in the research networks of five Energy Frontier Research Centers. Research Evaluation, 25(4), 472–485. https://doi.org/10.1093/reseval/rvw006
Smith-Doerr, L., Alegria, S., & Sacco, T. (2017). How diversity matters in the US science and engineering workforce: A critical review considering integration in teams, fields, and organizational contexts. Engaging Science, Technology, and Society, 3. https://doi.org/10.17351/ests2017.142
Smith-Jentsch, K. A., Cannon-Bowers, J. A., Tannenbaum, S. I., & Salas, E. (2008). Guided team self-correction: Impacts on team mental models, processes, and effectiveness. Small Group Research, 39(3), 303–327. https://doi.org/10.1177/1046496408317794
Smith-Jentsch, K. A., & Sierra, M. J. (2023). Houston we have a problem: How debriefing method impacts open communication and the depth of team reflexivity. Journal of Business and Psychology, 38(6), 1211–1232. https://doi.org/10.1007/s10869-023-09912-9
Sourati, J., & Evans, J. A. (2023). Accelerating science with human-aware artificial intelligence. Nature Human Behaviour, 7(10), 1682–1696. https://doi.org/10.1038/s41562-023-01648-z
Specht, A., & Crowston, K. (2022). Interdisciplinary collaboration from diverse science teams can produce significant outcomes. PLoS One, 17(11), e0278043. https://doi.org/10.1371/journal.pone.0278043
Star, S. L. (2010). This is not a boundary object: Reflections on the origin of a concept. Science, Technology, & Human Values, 35(5), 601–617. https://doi.org/10.1177/0162243910377624
Star, S. L., & Griesemer, J. R. (1989). Institutional ecology, ‘translations’ and boundary objects: Amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39. Social Studies of Science, 19(3), 387–420. https://doi.org/10.1177/030631289019003001
Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision making: Biased information sampling during discussion. Journal of Personality and Social Psychology, 48(6), 1467–1478.
Steiner, J. S., Blum-Barnett, E., Rolland, B., Kraus, C. R., Wainwright, J. V., Bedoy, R., Martinez, Y. T., Alleman, E. R., Eibergen, R., Pieper, L. E., Carroll, N. M., Hixon, B., Sterrett, A., Rendle, K. A., Saia, C., Vachani, A., Ritzwoller, D. P., & Burnett-Hartman, A. (2023). Application of team science best practices to the project management of a large, multi-site lung cancer screening research consortium. Journal of Clinical and Translational Science, 7(1), e145. https://doi.org/10.1017/cts.2023.566
Stewart, V. R., Snyder, D. G., & Kou, C.-Y. (2023). We hold ourselves accountable: A relational view of team accountability. Journal of Business Ethics, 183(3), 691–712. https://doi.org/10.1007/s10551-021-04969-z
Stokols, D. (2010). Cross-disciplinary team science initiatives: Research, training, and translation. In R. Frodeman, J. T. Klein, C. Mitcham, & J. B. Holbrook (Eds.), Oxford handbook of interdisciplinarity. https://escholarship.org/uc/item/4wp2g8dm
Sutton, L., Berdan, L. G., Bolte, J., Califf, R. M., Ginsburg, G. S., Li, J. S., McCall, J., Moen, R., Myers, B. S., Rodriquez, V., Veldman, T., & Boulware, L. E. (2019). Facilitating translational team science: The project leader model. Journal of Clinical and Translational Science, 3(4), 140–146. https://doi.org/10.1017/cts.2019.398
Tannenbaum, S., Fernández Castillo, G., & Salas, E. (2023). How to overcome the nine most common teamwork barriers. Organizational Dynamics, 52, 101006. https://doi.org/10.1016/j.orgdyn.2023.101006
Tannenbaum, S. I., & Cerasoli, C. P. (2013). Do team and individual debriefs enhance performance? A meta-analysis. Human Factors, 55(1), 231–245. https://doi.org/10.1177/0018720812448394
Tannenbaum, S. I., & Greilich, P. E. (2023). The debrief imperative: Building teaming competencies and team effectiveness. BMJ Quality & Safety, 32(3), 125–128. https://doi.org/10.1136/bmjqs-2022-015259
Tebes, J. K., & Thai, N. D. (2018). Interdisciplinary team science and the public: Steps toward a participatory team science. The American Psychologist, 73(4), 549–562. https://doi.org/10.1037/amp0000281
Tesler, R., Mohammed, S., Hamilton, K., Mancuso, V., & McNeese, M. (2018). Mirror, mirror: Guided storytelling and team reflexivity’s influence on team mental models. Small Group Research, 49(3), 267–305. https://doi.org/10.1177/1046496417722025
Thatcher, S. M., Meyer, B., Kim, Y., & Patel, P. C. (2024). A meta-analytic integration of the faultlines literature. Organizational Psychology Review, 14(2), 238–281. https://doi.org/10.1177/20413866231225064
Tilmes, N. (2022). Disability, fairness, and algorithmic bias in AI recruitment. Ethics and Information Technology, 24(2), 21. https://doi.org/10.1007/s10676-022-09633-2
Timóteo, M., Lourenço, E., Brochado, A. C., Domenico, L., da Silva, J., Oliveira, B., Barbosa, R., Montemezzi, P., Mourão, C. F. de A. B., Olej, B., & Alves, G. (2021). Digital management systems in academic health sciences laboratories: A scoping review. Healthcare, 9(6), 739. https://doi.org/10.3390/healthcare9060739
Traylor, A., Stahr, E., & Salas, E. (2020). Team coaching: Three questions and a look ahead: A systematic literature review. International Coaching Psychology Review, 15, 54–67. https://doi.org/10.53841/bpsicpr.2020.15.2.54
Trewin, S., & Pain, H. (1999). Keyboard and mouse errors due to motor disabilities. International Journal of Human-Computer Studies, 50(2), 109–144. https://doi.org/10.1006/ijhc.1998.0238
Tschan, F., Semmer, N. K., Gurtner, A., Bizzari, L., Spychiger, M., Breuer, M., & Marsch, S. U. (2009). Explicit reasoning, confirmation bias, and illusory transactive memory: A simulation study of group medical decision making. Small Group Research, 40(3), 271–300. https://doi.org/10.1177/1046496409332928
Tuckman, B. W. (1965). Developmental sequence in small groups. Psychological Bulletin, 63(6), 384–399. https://doi.org/10.1037/h0022100
Tumilty, E., Spratt, H., Cestone, C., Wooten, K., Aronson, J., Hommel, J., Hellmich, M. R., & Chao, C. (2022). Developing future translational scientists through authentic learning and assessments. International Journal of Educational Research Open, 3, 100151. https://doi.org/10.1016/j.ijedro.2022.100151
Van Hartesveldt, C. J. (2016). Integrative graduate education and research. In W. Bainbridge & M. Roco (Eds.), Handbook of science and technology convergence (pp. 1045–1058). Springer, Cham. https://doi.org/10.1007/978-3-319-07052-0_61
van Knippenberg, D., & Schippers, M. C. (2007). Work group diversity. Annual Review of Psychology, 58(1), 515–541. https://doi.org/10.1146/annurev.psych.58.110405.085546
Vashdi, D. R., Bamberger, P. A., & Erez, M. (2012). Can surgical teams ever learn? The role of coordination, complexity, and transitivity in action team learning. Academy of Management Journal, 56, 945–971. https://doi.org/10.5465/amj.2010.0501
Vela, C., Menchaca, V. D., & Silva, H. (2023). University faculty perceptions of professional development: Impact and effectiveness. Journal of Educational Leadership in Action.
Verwijs, C., & Russo, D. (2024). The double-edged sword of diversity: How diversity, conflict, and psychological safety impact software teams. IEEE Transactions on Software Engineering, 50(1), 141–157. https://doi.org/10.1109/TSE.2023.3339881
Vogel, A., Feng, A., Oh, A., Hall, K., Stipelman, B., Stokols, D., Okamoto, J., Perna, F., Moser, R., & Nebeling, L. (2012). Influence of a National Cancer Institute transdisciplinary research and training initiative on trainees’ transdisciplinary research competencies and scholarly productivity. Translational Behavioral Medicine, 2(4), 459–468. https://doi.org/10.1007/s13142-012-0173-0
Waizenegger, L., McKenna, B., Cai, W., & Bendz, T. (2020). An affordance perspective of team collaboration and enforced working from home during COVID-19. European Journal of Information Systems, 29(4), 429–442. https://doi.org/10.1080/0960085X.2020.1800417
Wallerstein, N., Calhoun, K., Eder, M., Kaplow, J., & Wilkins, C.H. (2019). Engaging the community: Community-based participatory research and team science. In K. Hall, A. Vogel, & R. Croyle (Eds.), Strategies for team science success. Springer, Cham. https://doi.org/10.1007/978-3-030-20992-6_9
Wang, G., Oh, I.-S., Courtright, S., & Colbert, A. (2011). Transformational leadership and performance across criteria and levels: A meta-analytic review of 25 years of research. Group & Organization Management, 36, 223–270. https://doi.org/10.1177/1059601111401017
Wang, J., & Hicks, D. (2015). Scientific teams: Self-assembly, fluidness, and interdependence. Journal of Informetrics, 9(1), 197–207. https://doi.org/10.1016/j.joi.2014.12.006
Wang, K., & Nickerson, J. V. (2017). A literature review on individual creativity support systems. Computers in Human Behavior, 74, 139–151. https://doi.org/10.1016/j.chb.2017.04.035
Wardale, D. (2013). Towards a model of effective group facilitation. Leadership & Organization Development Journal, 34(2), 112–129. https://doi.org/10.1108/01437731311321896
Webber, S. S., & Donahue, L. M. (2001). Impact of highly and less job-related diversity on work group cohesion and performance: A meta-analysis. Journal of Management, 27(2), 141–162. https://doi.org/10.1016/S0149-2063(00)00093-3
Weingart, L. R., Behfar, K. J., Bendersky, C., Todorova, G., & Jehn, K. A. (2015). The directness and oppositional intensity of conflict expression. The Academy of Management Review, 40(2), 235–262. https://doi.org/10.5465/amr.2013.0124
Wellemeyer, D., & Williams, J. (2019). Onboarding 2.0: Methods of designing and deploying effective onboarding training for academic libraries. Nova Science Publishers, Inc.
Wentz, B., Jaeger, P. T., & Lazar, J. (2011). Retrofitting accessibility: The legal inequality of after-the-fact online access for persons with disabilities in the United States. First Monday, 16(11). https://doi.org/10.5210/fm.v16i11.3666
Whitworth, E., & Biddle, R. (2007). Motivation and cohesion in agile teams. Lecture Notes in Computer Science, 4536. https://doi.org/10.1007/978-3-540-73101-6_9
Wiese, C. W., Burke, C. S., Tang, Y., Hernandez, C., & Howell, R. (2022). Team learning behaviors and performance: A meta-analysis of direct effects and moderators. Group & Organization Management, 47(3), 571–611. https://doi.org/10.1177/10596011211016928
Wiese, C. W., Shuffler, M. L., & Salas, E. (2015). Teamwork and team performance measurement. In J. D. Wright (Ed.), International encyclopedia of the social & behavioral sciences (2nd ed., pp. 96–103). https://doi.org/10.1016/B978-0-08-097086-8.22017-5
Williams, K. Y., & O’Reilly, C. A., III. (1998). Demography and diversity in organizations: A review of 40 years of research. Research in Organizational Behavior, 20, 77–140.
Williamson, A., & Burns, N. (2014). The safety of researchers and participants in primary care qualitative research. British Journal of General Practice, 64(621), 198–200. https://doi.org/10.3399/bjgp14X679480
Wolf, A. V., Hendrick, K. N., Kramer, W. S., & Shuffler, M. L. (2024). More teams, more meetings? Toward an understanding of multiteam system meeting design, facilitation, and effectiveness. Organizational Psychology Review. https://doi.org/10.1177/20413866241264121
Woodside, R., Rosenthal, G., & Olivier, C. (2021). Implementing the innovative academic Learning Health System Scholars (aLHSS) Postdoctoral Training Program (TL1) at Wake Forest University Health Sciences (WFUHS). Journal of Clinical and Translational Science, 5(s1), 132–133. https://doi.org/10.1017/cts.2021.739
Woolley, A. W., Gerbasi, M. E., Chabris, C. F., Kosslyn, S. M., & Hackman, J. R. (2008). Bringing in the experts: How team composition and collaborative planning jointly shape analytic effectiveness. Small Group Research, 39(3), 352–371. https://doi.org/10.1177/1046496408317792
Wróbel, A. E., Lomberg, C., & Cash, P. (2021). Facilitating design: Examining the effects of facilitator’s neutrality on trust and potency in an exploratory experimental study. Design Science, 7, e6. https://doi.org/10.1017/dsj.2021.5
Wu, L., Wang, D., & Evans, J. A. (2019). Large teams develop and small teams disrupt science and technology. Nature, 566(7744), 378–382. https://doi.org/10.1038/s41586-019-0941-9
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036–1039. https://doi.org/10.1126/science.1136099
Xu, H. (2025). [Exploring the use of artificial intelligence to facilitate team science]. Paper commissioned by the Committee on Research and Application in Team Science at the National Academies of Sciences, Engineering, and Medicine.
Yammarino, F. J., Dionne, S. D., Chun, J. U., & Dansereau, F. (2005). Leadership and levels of analysis: A state-of-the-science review. The Leadership Quarterly, 16(6), 879–919.
Yang, Y., Tian, T. Y., Woodruff, T. K., Jones, B. F., & Uzzi, B. (2022). Gender-diverse teams produce more novel and higher-impact scientific ideas. Proceedings of the National Academy of Sciences, 119(36), e2200841119. https://doi.org/10.1073/pnas.2200841119
Yukl, G. (2006). Leadership in organizations (6th ed.). Pearson Prentice Hall.
Yukl, G., Gordon, A., & Taber, T. (2002). A hierarchical taxonomy of leadership behavior: Integrating a half century of behavior research. Journal of Leadership & Organizational Studies, 9(1), 15–32.
Zaccaro, S. J., Dubrow, S., Torres, E. M., & Campbell, L. N. P. (2020). Multiteam systems: An integrated review and comparison of different forms. Annual Review of Organizational Psychology and Organizational Behavior, 7(1), 479–503. https://doi.org/10.1146/annurev-orgpsych-012119-045418
Zaccaro, S. J., Rittman, A. L., & Marks, M. A. (2001). Team leadership. The Leadership Quarterly, 12(4), 451–483.
Zajac, S., Woods, A., Tannenbaum, S., Salas, E., & Holladay, C. L. (2021). Overcoming challenges to teamwork in healthcare: A team effectiveness framework and evidence-based guidance. Frontiers in Communication, 6, 606445.
Zajdela, E. R., Huynh, K., Feig, A. L., Wiener, R. J., & Abrams, D. M. (2025). Face-to-face or face-to-screen: A quantitative comparison of conference modalities. PNAS Nexus, 4(1). https://doi.org/10.1093/pnasnexus/pgae522
Zajdela, E. R., Huynh, K., Wen, A. T., Feig, A. L., Wiener, R. J., & Abrams, D. M. (2022). Dynamics of social interaction: Modeling the genesis of scientific collaboration. Physical Review Research, 4(4), L042001. https://doi.org/10.1103/PhysRevResearch.4.L042001