The Role of Competition in Learning. Placing learners in competitive environments has long been employed as a tool to motivate learning (Johnson & Johnson, 1989). A generation of researchers mapping the human brain suggests that the human motivations to acquire and defend, while creating bonding relationships with others collaboratively engaged in the same work, may be hard-wired within people (Cocchi et al., 2013; Nohria et al., 2008). Yet placing individuals in competitive environments, defined as goal-oriented situations in which individuals direct effort toward achieving their own goals even when doing so may negatively affect others (Hong et al., 2009), may not be universally beneficial. Research indicates that simply placing people in competition with each other may dampen motivation and performance in certain circumstances, especially when success is not expected (Vallerand, Gauvin, & Halliwell, 1986). Indeed, seminal research indicates that motivation and performance are often optimized through the combination of intra-team collaboration and inter-team competition, where individuals work within a supportive team environment while competing against other teams (Tauer & Harackiewicz, 2004).
Past research on competition and performance often involved physical performance, such as shooting a basketball through a hoop (e.g., Tauer & Harackiewicz, 2004). Only relatively recently, owing in part to the rise of emerging communications technologies, has research begun to focus on the role of competition in the learning environment. For example, the gamification of learning has received increased attention over the past decade, with studies finding that elements like points, badges, and leaderboards increased participant attendance and engagement and decreased the learning gap between participants (Dicheva & Dichev, 2015). Furthermore, a meta-analysis of gamified training and learning outcomes found that elements of challenge and competition had large effects on affective and behavioral knowledge (Sanchez & Van Lysebetten, 2017). Because various game elements can differentially affect outcomes, they should be selected with specific goals in mind (Armstrong & Landers, 2018). The theory of gamified learning (Landers, 2014) lays out how the effects of gamification are indirect, always operating through a specific psychological or behavioral change (e.g., increasing extrinsic motivation through badge obtainment). Thus, in determining which gamified elements are beneficial, it is essential to determine the goals and needs of the specific training program, rather than simply adding elements that are in vogue.
For example, when focusing solely on behavioral learning outcomes, the presence of competition (e.g., a leaderboard) can increase the quality of learner performance and skills; however, if motivational outcomes such as self-efficacy and engagement are also of interest, then competition augmented by collaborative efforts (e.g., working in a collaborative team against other teams) can improve both behavioral and motivational outcomes (Sailer & Homner, 2020). Recent evidence also suggests that placing students in collaborative teams that compete with other teams in course-based games results in increased learning in higher education (Cagiltay et al., 2015). However, positive results have not been universal; some studies (Bandura & Locke, 2003; Vandercruysse et al., 2013) echo past researchers who posit that learners should have some degree of self-efficacy and an expectation that success is possible for the benefits of competition to accrue. Together, this highlights the need for intentional and theoretically grounded application of gamified elements.
Competition in Student Leadership Learning. A recent national study of leadership educators (Jenkins, 2016) suggested that fewer than 2% of higher education-based leadership educators who teach online regularly utilize some form of competition (e.g., debate, scavenger hunts) within their course pedagogies. This finding may not be surprising, given empirical evidence (Owen, 2012) that most leadership programs in higher education teach post-industrial models of leadership, in which leaders engage in collaborative, ethical, and value-based relationships with their members to benefit a larger society (Rost, 1993). Supporting students in mastering related leadership practices may not seem aligned with having them compete with other teams in zero-sum games that produce winners and losers. However, little research currently exists on the benefits (or costs) of placing leadership students in collaborative teams to compete with other teams, other than a recent study (Rosch & Headrick, 2020) that suggests potential for further study. Our research was designed to investigate the degree to which university students with various backgrounds increase their capacity to lead, as well as their motivation and self-efficacy related to leadership practices, through engaging in a competitive team environment.
Desired Dimensions of Leadership Learning. The definition of “leadership capacity” remains a relatively contested concept within leadership scholarship (Dugan, 2017). However, some research (e.g., Hannah & Avolio, 2010; Rosch & Collins, 2020) has begun to advocate for a general model in leadership education where the goals of student learning include capacities related to: (a) post-industrial leadership skill, including both transformational and transactional behaviors and attitudes; (b) leader self-efficacy; and (c) motivation to lead. This combination of capacities, referred to as the “Ready, Willing, and Able” model of leadership (Keating, Rosch, & Burgoon, 2014), represents the central construct upon which we evaluated student growth in this study. In the model, leadership skill (“Able”) consists of a combination of transformational and transactional skills (Rosch & Collins, 2020) originally developed as part of transformational leadership theory (Bass & Avolio, 1994) and measured by the Transformational Leader Behaviors scale (Podsakoff et al., 1990). Leader self-efficacy (“Ready”) refers to a prospective leader’s sense that enacted behaviors would be successful and is related to one’s confidence acting as a leader (Hannah et al., 2008). Motivation to lead (“Willing”) refers to the psychological press that someone might possess to begin behaving as a leader (Chan & Drasgow, 2001).
A central aspect of this model of leadership learning is that students must develop capacity within all three areas before leadership behavior is likely to manifest in any given context (Rosch & Collins, 2020). Without motivation to engage in the work of leadership, an emerging leader with skill and self-efficacy might decide to opt out of such work. Without a sense that one’s behaviors would be successful, a motivated and skilled student might not choose to volunteer or otherwise engage in leadership.
One dimension of leadership learning that several researchers (e.g., Dugan, 2011; Miscenko et al., 2017) suggest requires increased attention is longitudinal growth over time. For example, a recent study of leadership-involved college students revealed complex growth curves, not straight lines, in their reported capacity growth over multiple years (Rosch & Collins, 2019). Such results highlight the need to collect longitudinal data and to attend to students’ age and the amount of time they have spent on campus (i.e., their reported class year) as variables in leadership research.
The Collegiate Leadership Competition. Founded in 2015, the Collegiate Leadership Competition (CLC) (www.collegiateleader.com) was designed to create a practice field for leadership development (Allen et al., 2017). Specifically, the experience was designed to create an arena for deliberate practice (Allen, 2018; Allen et al., 2018), allowing students the opportunity to behave and act as leaders and then receive immediate feedback on results to apply to future situations. The CLC employs a curriculum of approximately 100 unique leadership concepts that students are required to master for the team to be successful in the competition (a cognitivist dimension). Weekly practices build skills in problem-solving, navigating difficult conversations, and ethical decision-making (a behaviorist dimension), and throughout the experience, time is set aside for reflection to create new insights (a humanist dimension). Likewise, each team has a formal coach who serves as a mentor and role model (a social learning dimension). The competition serves as a “crucible moment,” or concrete experience, where participants have a chance to put their knowledge and skills into action. After the competition, teams spend time reflecting and making meaning of their experience (a constructivist dimension).
Such a focus on practice and “doing” (rather than passive learning while seated, or through discussing theoretical concepts) stems from research (Marsick et al., 2009; Rabin, 2014) indicating that learning occurs 70% through practice, 20% through formal coaching and mentoring, and only 10% through formal instruction (Noe, 2017). Similarly, one of the most widely used and highly regarded training interventions is Behavior Modeling Training (BMT) (Decker & Nathan, 1985; Taylor et al., 2005), based on Bandura’s (1977) social learning theory. The training design elements emphasized by BMT include: (a) outlining a set of well-defined behaviors/skills that are the focus of the training; (b) providing a model that displays effective use of the behavior/skill; (c) building in dedicated time to practice the behaviors; (d) including feedback and reinforcement following practice; and (e) attempting to maximize transfer. Thus, BMT-based training requires individuals to display their skill in real time, where they receive almost immediate feedback. A meta-analysis found that BMT had a significant effect on learning outcomes and a smaller effect on job behavior; the latter effect, however, increased over time (Taylor et al., 2005).
The CLC has adopted elements similar to both the 70-20-10 model and BMT by providing a structure that includes necessary but minimal lecture-style instruction, a heavy emphasis on practice, and the inclusion of immediate and constant coaching and feedback. Furthermore, the philosophy of the CLC is supported by research on experiential learning, which leverages experience and reflection to guide development (Tews et al., 2017). Experiential learning can include programs like high-ropes courses, field trips, and game-based activities that aim to improve learning by making the training fun. An extension of this is the process of gamification, where elements of game design are included in non-game contexts such as training and education (Lumsden et al., 2016). Gamified elements include narrative fiction, point-scoring, and competition, among others.
Although several of the games, challenges, and activities conducted in CLC practices and competition may appear to lack a connection to leadership on the surface, there is an intentional connection between the curriculum and the activities. For instance, a simple activity such as the “Pringles Ringle” (an activity where students are challenged to create a free-standing vertical ring made solely of Pringles potato chips) directly aligns with CLC’s curriculum around problem-solving, navigating stressors, leadership styles, followership styles, and influence. The concepts and processes outlined in the curriculum are explored and debriefed in various contexts, and their utility tested repeatedly. Additionally, by building in time during each practice and competition day activity to reflect within the team on their performance as a whole and the leader’s specifically, the CLC curriculum encourages fun while ensuring the focus remains on improvement and sustained development.
Study Rationale and Research Questions. The ultimate objective of the CLC is to create a practice field for leadership learning. Within the current state of the field of leadership development, it exists as a unique pedagogical initiative, using competitive team environments to support students in fully engaging in their process of leader development. We believe that due to its uniqueness and relatively widespread integration into the higher education arena, with more than 75 schools (e.g., United States Air Force Academy, New York University, Miami University) having participated to date, its effectiveness should be empirically assessed and evaluated. Moreover, a recently published pilot study (Rosch & Headrick, 2020) reported significant and positive results, but was unable to examine potential moderating variables such as student class year or students’ experiences with coaches. Given these needs, we designed a larger and more comprehensive study to address the following research questions:
- To what degree does the CLC experience contribute to student leadership capacity development, both in the short-term and several months afterward?
- Do differences exist in capacity development across class year status? For example, do first-year students gain more or less from the experience than seniors?
- To what extent does the degree of support students feel from their CLC coach contribute to these students’ leadership capacity development?
Population, Sample, and Data Collection. The sample for our study was drawn from the population of participants at all regional Collegiate Leadership Competitions that took place during the Spring 2019 semester. These participants were all students enrolled at two-year and four-year colleges and universities within the United States and Canada. All students were members of an institutional team. The group of institutions was diverse geographically and in terms of size, academic selectivity, control (public or private), and cost. Each participant was invited to complete an electronic survey at three separate times: at the beginning of their participation early in the Spring semester (pre-test); at the time of their Competition in April; and several months later, in August (post-test).
We created two samples for our data analysis in response to our research questions. Our first sample consisted of those Competition participants who completed the first two phases of data collection; our purpose was to ascertain how participant leadership capacity might have shifted during the period of team practices in anticipation of the Spring Competition. Our second sample was constructed from those Competition participants who completed all three phases of data collection, which allowed us to investigate both long-term capacity shifts and the degree to which shifts during the active phases of Competition practice may have led to lasting effects.
Within our first sample (those who completed a pre-test and an at-competition test), 153 participants were included. Of those, 62% (n=96) identified as women. With regards to racial identity, 75% (n=116) identified as White; 6% (n=9) as Black/African-American; 8% (n=12) as Asian/Asian-American; 5% (n=7) as Latinx; and the remainder as either multi-racial or preferred not to answer. Approximately 16% (n=24) identified as first-year students; 36% (n=55) as sophomores; 26% (n=39) as juniors; 21% (n=13) as seniors; and 3% (n=5) as graduate students. Approximately 7% (n=11) identified as international students, and 13% (n=20) as students who transferred from another higher education institution. From this larger sample, 47 participants also completed a post-test collected four months after their Competition concluded. Demographically, these 47 students did not significantly differ from the larger sample.
Instrumentation. To assess student leadership capacity, we employed the Ready-Willing-Able Leader (RWAL) scale, a 21-item survey instrument that assesses three broad bases for leadership capacity: motivation to lead (MTL), leader self-efficacy (LSE), and leadership skill (LSK). The MTL scale itself consisted of three sub-scales: affective-identity motivation to lead (MTL-AI), non-calculative motivation to lead (MTL-NC), and social-normative motivation to lead (MTL-SN). Each item’s 7-point Likert-based response scale ranged from “strongly disagree” to “strongly agree.” The RWAL scale is relatively new, having been recently tested and validated (Rosch & Collins, 2020). Within our current study, the alpha reliability statistics that emerged from our analysis ranged from barely acceptable (0.63 for the LSE post-test) to quite strong (0.88 for the MTL-AI pre-test). Given these results, we elected to continue with our analysis.
To measure coach support, we used an adapted version of the Mentoring for Leadership Development scale created and employed within the Multi-Institutional Study of Leadership Development (MSL). Researchers associated with the MSL have divided the Mentoring scale into two subscales – one associated with mentoring focused on leadership development specifically, and one focused on personal development more broadly (e.g., Campbell et al., 2012). We found an extremely high alpha reliability shared across all ten items (0.93) and therefore chose to collapse the two scales into one. Within each item, we slightly adapted language to reflect the CLC environment and each participant’s CLC-assigned coach rather than referring to a participant-identified mentor. A sample item from our adapted survey was, “When thinking about your CLC Coach, this person helped you to value teammates from diverse backgrounds.” As within the MSL, we utilized a Likert scale ranging from “strongly disagree” to “strongly agree.” We also chose to distribute this particular scale only within the at-competition-test, when participants would presumably be most accurate in determining the degree of support they received from their Coach given the time proximity between receiving coaching and completing the survey.
Data Analysis. The purpose of our research study was to examine the degree to which the CLC experience contributed to student leadership development over short-term and longer-term timeframes. Where statistically significant changes emerged, we then investigated the degree to which participants’ class year and perceived support from their CLC Coach might moderate such development. We employed the following data analytic design:
- We first conducted matched-sample t-tests, comparing pre-test scores to participants’ at-competition scores as well as to their post-test scores (measured several months later).
- We then focused on the aspects of student leadership capacity where significant differences emerged. To investigate the degree to which class year moderated capacity development, we conducted a mixed-design analysis of variance, also commonly referred to as “SPANOVA” (Split-plot Analysis of Variance). Within each SPANOVA design, we included class year as a between-subjects independent variable and the relevant aspect of leadership capacity as the within-subjects dependent variable. Where traditional ANOVA is limited to one cross-sectional dependent variable dataset, SPANOVA can include multiple data collection points spread over time (Huck & McLean, 1975), which is a more powerful way to measure change than using arithmetic to create a single “change” score. While multi-level modeling would be even more powerful than SPANOVA, the size of our longitudinal dataset (n=47) unfortunately did not provide enough statistical power to conduct such analyses rigorously.
- Lastly, we employed a more common series of hierarchical multiple regressions to investigate the effect of the CLC Coach on leadership capacity development over time. We elected to use regression, rather than SPANOVA, when focused on coaching effects because we were interested in understanding how the CLC coach might affect student leadership development at specific points in time: at the Competition and again when measured months later. We created an “At Competition” change score, calculated as the difference between participants’ pre-test and at-competition leadership capacity scores, and a “Post-competition” change score, calculated as the difference between their pre-test and post-competition scores. We also included a series of demographic variables, both to control for variation within demographic groups when analyzing change over time and to investigate the relative power of each demographic variable in predicting such change.
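As a minimal illustration of the first step in this design, a matched-sample t statistic and its paired-samples Cohen’s d can be computed directly from difference scores. The sketch below uses only the Python standard library and invented scores (our actual analyses used the full survey dataset):

```python
import math
from statistics import mean, stdev

def paired_t_and_d(pre, post):
    """Matched-sample t statistic and Cohen's d for paired scores.
    For paired data, d is the mean difference divided by the
    standard deviation of the differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    d_bar, d_sd = mean(diffs), stdev(diffs)
    t = d_bar / (d_sd / math.sqrt(len(diffs)))
    cohens_d = d_bar / d_sd
    return t, cohens_d

# Hypothetical pre-test and at-competition scores for five participants
pre = [5, 4, 6, 5, 4]
post = [6, 5, 7, 5, 5]
t, d = paired_t_and_d(pre, post)  # t = 4.0 on df = 4
```

Note that standardizing by the standard deviation of the differences is only one convention for a paired-samples effect size; standardizing by the pre-test standard deviation is also common.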
We display the overall means and dispersion statistics for each of the leadership capacity dependent variables, as well as the scale of perceived coaching support, in Table 1. Our initial analysis of leadership capacity change over time consisted of a series of matched-sample t-tests, comparing participants’ pre-competition scores to their scores at competition, and again to their scores several months post-competition. We display these analyses in Table 2 and include Cohen’s d scores for statistically significant results (p<.05) to show relevant effect sizes. In general, participants reported their highest level of leadership capacity at their Competition, and their scores tapered when measured after that. No statistically significant longitudinal results emerged related to participant affective-identity or social-normative motivation to lead, so our subsequent analyses do not reflect these aspects of leadership capacity.
Table 1. Overall Student Leadership Capacity Means, Coaching Support, and Dispersion
| Scale | Pre-competition µ (σ) | At-competition µ (σ) | Post-competition µ (σ) |
| --- | --- | --- | --- |
| MTL-AI | 4.91 (1.11) | 5.02 (1.11) | 4.91 (1.01) |
| MTL-NC | 5.61 (1.10) | 5.76 (1.02) | 5.87 (0.88) |
| MTL-SN | 5.85 (0.78) | 5.98 (0.69) | 5.92 (0.71) |
| LSK | 5.96 (0.65) | 6.20 (0.62) | 6.07 (0.80) |
| LSE | 5.30 (0.72) | 5.71 (0.68) | 5.79 (0.62) |
| Coach support | – | 6.42 (0.68) | – |
Note: MTL-AI = Motivation to Lead, Affective-Identity; MTL-NC = Motivation to Lead, Non-Calculative; MTL-SN = Motivation to Lead, Social-Normative; LSK = Leadership Skill; LSE = Leader Self-Efficacy. Coach support was measured only at competition.
Table 2. T-test Results Measuring Leadership Capacity Change
| Scale | t (df) | p | d | t (df) | p | d |
| --- | --- | --- | --- | --- | --- | --- |
| MTL-AI | -0.57 (152) | .57 | – | -0.29 (46) | .78 | – |
| MTL-NC | 3.51 (152) | .001 | 0.57 | 0.98 (46) | .33 | – |
| MTL-SN | 1.91 (152) | .06 | – | -0.56 (46) | .58 | – |
| LSK | 5.45 (152) | <.001 | 0.88 | 0.44 (46) | .66 | – |
| LSE | 7.67 (152) | <.001 | 1.24 | 4.71 (46) | <.001 | 1.39 |

Note: The first three result columns compare pre-test with at-competition scores (n=153); the final three compare pre-test with post-test scores (n=47). Effect sizes (d) are shown only for statistically significant results.
Note: MTL-AI = Motivation to Lead, Affective-Identity; MTL-NC = Motivation to Lead, Non-Calculative; MTL-SN = Motivation to Lead, Social-Normative; LSK = Leadership Skill; LSE = Leader Self-Efficacy.
While some aspects of participants’ scores did not statistically differ over time, their scores related to non-calculative motivation to lead, leadership skill, and leader self-efficacy did change significantly. Therefore, we conducted three SPANOVA analyses, one each examining change in non-calculative motivation to lead, leadership skill, and leader self-efficacy, each including participants’ class year as a between-subjects factor. The analyses examining non-calculative motivation to lead and leadership skill yielded neither a main effect (signifying that participants’ scores did not significantly change over time) nor an interaction effect (signifying that class year did not predict leadership capacity change over time). Within the non-calculative motivation to lead analysis, the main effect results were F=0.57 (df=2), p=.57, and the interaction effect results were F=0.81 (df=8), p=.60. Within the leadership skill analysis, the main effect results were F=0.37 (df=2), p=.72, and the interaction effect results were F=1.27 (df=8), p=.27. The line charts representing score changes over time by class year are displayed in Figure 1 (non-calculative motivation to lead) and Figure 2 (leadership skill). These lines imply that, while some groups of students gained capacity while preparing for their Competition, these gains were generally not sustained over time, with the exceptions of first-year students’ leadership skill and seniors’ non-calculative motivation to lead.
Figure 1. Non-calculative Motivation to Lead Scores Over Time.
Figure 2. Leadership Skill Scores Over Time.
Our analysis related to leader self-efficacy yielded statistically significant results. A main effect emerged (signifying that participants’ scores changed longitudinally), F=6.18 (df=2), p=.002, while the interaction effect of class year on time was not significant, F=0.74 (df=8), p=.66. These results suggest that while participants’ scores increased and were sustained over time, class year was not a factor in that increase. Figure 3 graphically displays participants’ scores and implies that participants’ gains were made while preparing for their competition and, other than for the very small group of graduate students, were generally sustained long after.
Figure 3. Leader Self-efficacy Scores Over Time.
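The SPANOVA logic behind these results can be made concrete for a balanced split-plot design, where the total sum of squares is partitioned into between-subjects (group, subjects-within-groups) and within-subjects (time, group × time, residual) components. The sketch below uses only the Python standard library and invented, perfectly balanced scores (our actual class-year groups were unequal in size):

```python
def spanova(data):
    """F ratios for a balanced split-plot (SPANOVA) design.
    data[g][s] holds the repeated measures for subject s in group g,
    so group is the between-subjects factor and time the within-subjects factor."""
    g, s, k = len(data), len(data[0]), len(data[0][0])
    scores = [x for grp in data for subj in grp for x in subj]
    grand = sum(scores) / len(scores)

    subj_means = [[sum(subj) / k for subj in grp] for grp in data]
    group_means = [sum(grp) / s for grp in subj_means]
    time_means = [sum(data[i][j][t] for i in range(g) for j in range(s)) / (g * s)
                  for t in range(k)]
    cell_means = [[sum(data[i][j][t] for j in range(s)) / s for t in range(k)]
                  for i in range(g)]

    # Partition the total sum of squares into between- and within-subject parts
    ss_total = sum((x - grand) ** 2 for x in scores)
    ss_between_subj = k * sum((m - grand) ** 2 for grp in subj_means for m in grp)
    ss_group = s * k * sum((m - grand) ** 2 for m in group_means)
    ss_err_between = ss_between_subj - ss_group          # subjects within groups
    ss_time = g * s * sum((m - grand) ** 2 for m in time_means)
    ss_inter = s * sum((cell_means[i][t] - group_means[i] - time_means[t] + grand) ** 2
                       for i in range(g) for t in range(k))
    ss_err_within = ss_total - ss_between_subj - ss_time - ss_inter

    ms_err_within = ss_err_within / (g * (s - 1) * (k - 1))
    f_group = (ss_group / (g - 1)) / (ss_err_between / (g * (s - 1)))
    f_time = (ss_time / (k - 1)) / ms_err_within
    f_inter = (ss_inter / ((g - 1) * (k - 1))) / ms_err_within
    return f_group, f_time, f_inter

# Hypothetical scores: two class-year groups, two students each, two time points
data = [[[3, 5], [5, 9]],   # group 1
        [[4, 4], [6, 8]]]   # group 2
f_group, f_time, f_inter = spanova(data)
```

The key design point is that the between-subjects effect (group) is tested against subjects-within-groups error, while the within-subjects effects (time and the group × time interaction) are tested against the within-subjects residual.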
The final aspect of our analysis treated the degree of support participants reported receiving from their coach as our main moderator variable of interest in leadership capacity development. We examined score increases in two separate dependent variables, leadership skill (LSK) and leader self-efficacy (LSE), at two separate points in time: (a) change from the beginning of practice to the date of Competition; and (b) change from the beginning of practice to several months after the Competition had concluded. Therefore, we conducted four separate hierarchical multiple regressions. Within each, the first block of variables included students’ gender identity and class year (freshman through graduate student). The second block consisted of students’ relevant incoming leadership capacity score, while the third block consisted of the Mentoring for Leadership Development score reported by students regarding their CLC coach. The results of our four analyses appear in Tables 3, 4, 5, and 6. In sum, the results indicate that the coaching support students received significantly predicted their leadership capacity scores at-competition, even when controlling for pre-existing capacity, gender identity, and class year. However, several months later, support from their coach in preparation for the Competition was no longer a statistically significant predictor of their capacity.
Table 3. Leadership Skill (LSK) At-Competition Regression Results
| Block 1 β | Block 2 β | Block 3 β |
Note: * p<.05; ** p<.01; ***p<.001
Table 4. Leader Self-efficacy (LSE) At-Competition Regression Results
| Block 1 β | Block 2 β | Block 3 β |
Note: * p<.05; ** p<.01; ***p<.001
Table 5. Leadership Skill (LSK) Post-Competition Regression Results
| Block 1 β | Block 2 β | Block 3 β |
Note: * p<.05; ** p<.01; ***p<.001
Table 6. Leader Self-efficacy (LSE) Post-Competition Regression Results
| Block 1 β | Block 2 β | Block 3 β |
Note: * p<.05; ** p<.01; ***p<.001
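Conceptually, the hierarchical regressions summarized in Tables 3 through 6 compare nested OLS models, attributing the R² increment (ΔR²) of each step to the newly entered block. The sketch below uses only the Python standard library and invented data; the variable names are illustrative stand-ins for our actual blocks:

```python
def solve(a, b):
    """Solve a @ x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def r_squared(predictors, y):
    """OLS R-squared for y regressed on the given predictor columns
    (an intercept is always included)."""
    n = len(y)
    X = [[1.0] + [col[i] for col in predictors] for i in range(n)]
    p = len(X[0])
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    beta = solve(xtx, xty)
    fitted = [sum(b * v for b, v in zip(beta, row)) for row in X]
    ybar = sum(y) / n
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - sse / sst

# Invented data: the outcome is constructed from the two predictors
pretest = [1.0, 2.0, 3.0, 4.0]    # e.g., incoming capacity (an earlier block)
coach = [2.0, 1.0, 4.0, 3.0]      # e.g., perceived coach support (final block)
outcome = [9.0, 8.0, 19.0, 18.0]  # e.g., at-competition capacity

r2_reduced = r_squared([pretest], outcome)
r2_full = r_squared([pretest, coach], outcome)
delta_r2 = r2_full - r2_reduced   # variance uniquely attributed to the new block
```

In a full hierarchical analysis, each block's ΔR² is tested for significance with an incremental F test; this sketch shows only the nested-model R² comparison at its core.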
This study evaluated the effectiveness of the CLC, a unique student leadership development initiative in the United States and Canada. Our overarching goal was to investigate the short-term and long-term associations between student participation in the CLC and self-reported leadership capacity development using the Ready, Willing, and Able Leader scale (Rosch & Collins, 2020). The results suggest that students’ scores on the scale increased from the initial point of participation through the weekend of their competition within almost all areas of leadership capacity, with large to extremely large effects on students’ leadership skills and leader self-efficacy. Investigating longer-term effects, students’ self-reported leader self-efficacy scores remained elevated several months after the conclusion of all CLC-related activities, suggesting the CLC might provide lasting benefits in this particular area of leadership development.
To investigate the durable effects of participating in the CLC across class years, we conducted a mixed-design analysis of variance for each of the three aspects of leadership capacity for which our initial t-tests suggested score differences over time: non-calculative motivation to lead, leadership skill, and leader self-efficacy. Our results suggested that no long-term effects emerged related to leadership skill and motivation, which aligns with an earlier study (Rosch & Headrick, 2020) conducted using data from previous CLC events. However, durable long-term effects did emerge related to participant leader self-efficacy development. Relevant to our questions regarding class year differences, no significant effects emerged, suggesting that students had consistent experiences within the CLC regardless of their class year at their institution.
We also investigated the degree to which perceived support from the CLC team’s coach predicted students’ scores. Our results suggested that perceived support from the CLC coach alone predicted 2-4% of the variance in students’ skill and self-efficacy increases at the time of their Competition, even when controlling for students’ incoming capacity scores. Moreover, the standardized beta weights from our analysis indicated that coaching support had 60% as powerful an effect on the development of student leader self-efficacy as students’ pre-test leader self-efficacy scores, which is itself noteworthy. Similarly, coaching support had 33% as powerful an effect on students’ leadership skill as their skill pre-test. However, we also saw a noteworthy tapering effect over time: the effect of coaching support decreased to non-significance when measured several months after students had ended their relationship with their coach.
Implications for Research and Practice. Upon reflection, it is perhaps unsurprising that a coach might play a critical role in participants’ growth, a pattern that has also emerged in programs designed for adult professionals (Joo et al., 2012). As a result, there is an opportunity for program architects to better train and prepare coaches for the critical role they play in the experience. Coaches may tacitly understand their importance, but a more intentional and systematic focus on coaching in leadership education may yield interesting results. In the context of this research study specifically, the CLC could encourage its coaches to establish periodic check-ins with team members, for example, to maintain a focus on competitors’ learning and development. Another initial step may be to design a research initiative that focuses on the attributes or actions of effective coaches.
The role of the coach in the sustained impact of the CLC on leader self-efficacy is also worth exploring. Self-efficacy is essential for behavior change and is predicted most strongly by prior success, vicarious experiences, and social persuasion (Bandura, 1994). The CLC curriculum encourages teams to celebrate “small wins” that highlight successes; by offering specific and individualized feedback, the coach can guide participants toward a more accurate and strengths-based view of their performance. Furthermore, by framing participation in the CLC as a developmental opportunity rather than just a results-focused competition, coaches can reframe perceived failures as opportunities to improve and help participants set challenging goals for themselves, which may improve self-efficacy. The challenge for coaches is to balance the desire for success (e.g., high performance at the competition) with the long-term developmental needs of participants through a zone of proximal development (Vygotsky, 1978). Throughout the process, coaches act not only as role models, facilitating participant development and shaping team culture, but also as guides, providing a scaffolded and supportive environment where students’ leadership, followership, and team member capacities may effectively evolve. In effect, coaches provide a group container (see Bion, 1959) for their students’ leadership learning experience and model acceptable reactions to success and failure. Thus, exploring the impact of the team culture or learning orientation developed by the coach on self-efficacy may present an interesting avenue for future research.
Study Limitations. While this study analyzed a national dataset of diverse program participants and employed previously validated measures of student leadership development, several limitations were embedded in both its design and implementation. Perhaps most significantly, while a goal was to evaluate long-term leadership growth, the size of our sample limited our ability to implement the most rigorous analytic design possible (multi-level modeling). Future research that seeks to shed further light on similar research questions should include a larger sample so that these methods can be included in the analytic design. In addition, qualitative study could help explain and deepen our understanding of the interrelationships among the concepts we investigated. For example, examining the experiences of student participants and their perceptions of the benefits of their relationships with their CLC coach could shed light on the processes underlying the quantitative data we have shared.
While the CLC represents one broad example of how leadership educators might integrate inter-team competition and explicit coaching techniques and concepts into their curriculum, it remains only one example. Future research might branch into studies that focus on the CLC experience specifically and, separately, into studies of the effects of competition and coaching on student leadership development more broadly.
It stands to reason that the most effective leadership development initiatives might employ multiple dimensions of learning within their curriculum, including behavioral, cognitive, constructivist, humanist, and social learning dimensions. The CLC is a growing national-scale program whose explicit inter-team competition and coaching techniques incorporate significant aspects of each of these dimensions. We conducted a year-long study to evaluate its short-term and durable effects on student participants’ leadership skill, leader self-efficacy, and motivation to lead. Our results suggested that, over four months of team practice and preparation, students grew in all three areas, and that their growth in leader self-efficacy was sustained several months post-Competition. Student class year was not a significant factor in measured growth. The degree of coaching support students perceived emerged as a strong predictor of individual student development, especially over the course of the Competition experience. Our results suggest that inter-team competition may represent a powerful tool in leadership development efforts in higher education. Moreover, coaches may play a significant role in participant growth and should be examined more intentionally in future programs and research.
References

Allen, S. J. (2018). Deliberate practice: A new frontier in leadership education. Journal of Leadership Studies, 11(4), 41-43. https://doi.org/10.1002/jls.21555
Allen, S. J., Schwartz, A. J., & Jenkins, D. M. (2017). Collegiate leadership competition: An opportunity for deliberate practice on the road to expertise. In S. Kempster, A. F. Turner, & G. Edwards (Eds.), Field guide to leadership development (pp. 29-43). Edward Elgar Publishing. https://doi.org/10.4337/9781785369919.00008
Allen, S. J., Jenkins, D. M., & Buller, E. (2018). Reflections on how learning in other domains inform our approach to coaching leadership. Journal of Leadership Studies, 11(4), 58-64. https://doi.org/10.1002/jls.21559
Armstrong, M. B., & Landers, R. N. (2018). Gamification of employee training and development. International Journal of Training and Development, 22(2), 162-169. https://doi.org/10.1111/ijtd.12124
Bandura, A. (1977). Social learning theory. Prentice Hall.
Bandura, A. (1994). Self-efficacy. In V. S. Ramachaudran (Ed.), Encyclopedia of human behavior (Vol. 4, pp. 71-81). Academic Press. (Reprinted in H. Friedman [Ed.], Encyclopedia of mental health. Academic Press, 1998).
Barbour, J. (2006). Team building and problem-based learning in the leadership classroom: Findings from a two-year study. Journal of Leadership Education, 5(2), 28–40. https://doi.org/10.12806/v5/i2/ab3
Bass, B. M., & Avolio, B. J. (Eds.). (1994). Improving organizational effectiveness through transformational leadership. Sage Publications.
Bion, W. R. (1959). Attacks on linking. In E. Bott Spillius (Ed.), Melanie Klein today: Developments in theory and practice, Vol. 1: Mainly theory (1988). Routledge. https://doi.org/10.4324/9780203358832-13
Blume, B. D., Ford, J. K., Baldwin, T. T., & Huang, J. L. (2010). Transfer of training: A meta-analytic review. Journal of Management, 36(4), 1065-1105. https://doi.org/10.1177/0149206309352880
Brungardt, C. L., Greenleaf, J. P., Brungardt, C. J., & Arensdorf, J. (2006). Majoring in leadership: A review of undergraduate leadership degree programs. Journal of Leadership Education, 5(1), 4–25. https://doi.org/10.12806/v5/i1/rf1
Cagiltay, N. E., Ozcelik, E., & Ozcelik, N. S. (2015). The effect of competition on learning in games. Computers & Education, 87, 35-41. https://doi.org/10.1016/j.compedu.2015.04.001
Campbell, C. M., Smith, M., Dugan, J. P., & Komives, S. R. (2012). Mentors and college student leadership outcomes: The importance of position and process. The Review of Higher Education, 35(4), 595-625. https://doi.org/10.1353/rhe.2012.0037
Chan, K. Y., & Drasgow, F. (2001). Toward a theory of individual differences and leadership: understanding the motivation to lead. Journal of Applied Psychology, 86(3), 481-498. https://doi.org/10.1037/0021-9010.86.3.481
Chiriac, E. H., & Granstrom, K. (2012). Teachers’ leadership and students’ experience of group work. Teachers and Teaching, 18(3), 345–363. https://doi.org/10.1080/13540602.2012.629842
Cocchi, L., Zalesky, A., Fornito, A., & Mattingley, J. B. (2013). Dynamic cooperation and competition between brain systems during cognitive control. Trends in Cognitive Sciences, 17(10), 493-501. https://doi.org/10.1016/j.tics.2013.08.006
Collegiate Leadership Competition (CLC). www.collegiateleader.org.
Miscenko, D., Guenter, H., & Day, D. V. (2017). Am I a leader? Examining leader identity development over time. The Leadership Quarterly, 28(5), 605-620. https://doi.org/10.1016/j.leaqua.2017.01.004
Decker, P. J., & Nathan, B. R. (1985). Behaviour Modeling Training: Principles and applications. Praeger Publishers.
Dicheva, D., & Dichev, C. (2015, October). Gamification in education: Where are we in 2015? In E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 1445-1454). Association for the Advancement of Computing in Education (AACE).
Dugan, J. P. (2011). Pervasive myths in leadership development: Unpacking constraints on leadership learning. Journal of Leadership Studies, 5(2), 79–84. https://doi.org/10.1002/jls.20223.
Dugan, J. P. (2017). Leadership theory: Cultivating critical perspectives. Jossey-Bass Publishers.
Gokhale, A. A. (1995). Collaborative learning enhances critical thinking. Journal of Technology Education, 7(1), 1–8. https://doi.org/10.21061/jte.v7i1.a.2
Guthrie, K. L., & Jenkins, D. M. (2018). The role of leadership educators: Transforming learning. Information Age Publishing.
Hannah, S. T., & Avolio, B. J. (2010). Ready or not: How do we accelerate the developmental readiness of leaders? Journal of Organizational Behavior, 31(8), 1181–1187. https://doi.org/10.1002/job.675.
Hannah, S. T., Avolio, B. J., Luthans, F., & Harms, P. D. (2008). Leadership efficacy: Review and future directions. The Leadership Quarterly, 19(6), 669–692. https://doi.org/10.1016/j.leaqua.2008.09.007.
Harvey, M., & Jenkins, D. M. (2014). Knowledge, praxis, and reflection: The three critical elements of effective leadership studies programs. Journal of Leadership Studies, 7(4), 76–85. https://doi.org/10.1002/jls.21314
Hong, J. C., Hwang, M. Y., Lu, C. H., Cheng, C. L., Lee, Y. C., & Lin, C. L. (2009). Playfulness-based design in educational games: A perspective on an evolutionary contest game. Interactive Learning Environments, 17(1), 15-35. https://doi.org/10.1080/10494820701483615
Huck, S. W., & McLean, R. A. (1975). Using a repeated measures ANOVA to analyze the data from a pretest-posttest design: A potentially confusing task. Psychological Bulletin, 82(4), 511-518. https://doi.org/10.1037/h0076767.
Humphreys, P., Greenan, K., & McIlveen, H. (1997). Developing work-based transferable skills in a university environment. Journal of European Industrial Training, 21(2), 63–69. https://doi.org/10.1108/03090599710161739
International Leadership Association. (2020). Directory of leadership programs. Retrieved from
Jenkins, D. M. (2016). Teaching leadership online: An exploratory study of instructional and assessment strategy use. Journal of Leadership Education, 15(2), 129-149 https://doi.org/10.12806/V15/I2/R3.
Johnson, D. W., & Johnson, R. T. (1989). Cooperation and competition: Theory and research. Interaction Book Company.
Joo, B. K. B., Sushko, J. S., & McLean, G. N. (2012). Multiple faces of coaching: Manager-as-coach, executive coaching, and formal mentoring. Organization Development Journal, 30(1), 19-38.
Keating, K., Rosch, D. M., & Burgoon, L. (2014). Developmental readiness for leadership: The differential effects of leadership courses on creating “ready, willing, and able” leaders. Journal of Leadership Education, 13(3), 1-16. https://doi.org/10.12806/V13/I3/R1
Kolb, D. A. (2014). Experiential learning: Experience as the source of learning and development. Pearson Education Press.
Landers, R. N. (2014). Developing a theory of gamified learning: Linking serious games and gamification of learning. Simulation & Gaming, 45(6), 752-768. https://doi.org/10.1177/1046878114563660
Lumsden, J., Edwards, E. A., Lawrence, N. S., Coyle, D., & Munafò, M. R. (2016). Gamification of cognitive assessment and cognitive training: a systematic review of applications and efficacy. JMIR Serious Games, 4(2), e11. https://doi.org/10.2196/games.5888
Marsick, V. J., Watkins, K. E., Callahan, W. M., & Volpe, M. (2009). Informal and incidental learning in the workplace. In M. C. Smith & N. DeFrates-Densch (Eds.), Handbook of research on adult learning and development (pp. 570–600). Routledge. https://doi.org/10.4324/9781315715926
Michaelsen, L. K., Knight, A. B., & Fink, D. L. (2004). Team-based learning: A transformative use of small groups in college teaching. Stylus.
Michaelsen, L. K., & Sweet, M. (2008). The essential elements of team-based learning. In L. Michaelsen, M. Sweet, & D. X. Parmelee (Eds.), New directions for teaching and learning, no. 116: Team-based learning: Small group learning’s next big step (pp. 7–27). Jossey-Bass. https://doi.org/10.1002/tl.330
Multi-Institutional Study of Leadership (MSL). www.leadershipstudy.net.
Noe, R. (2017). Traditional training methods. In Employee training and development (7th ed., pp. 292-327). McGraw Hill Education.
Nohria, N., Groysberg, B., & Lee, L. (2008). Employee motivation: A powerful new model. Harvard Business Review, 86(7/8), 78.
Owen, J. (2012). Examining the design and delivery of collegiate student leadership development programs: Findings from the Multi-Institutional Study of Leadership (MSL-IS), a national report. Washington, DC: Council for the Advancement of Standards in Higher Education.
Podsakoff, P. M., MacKenzie, S. B., Moorman, R. H., & Fetter, R. (1990). Transformational leader behaviors and their effects on followers’ trust in leader, satisfaction, and organizational citizenship behaviors. The Leadership Quarterly, 1(2), 107-142. https://doi.org/10.1016/1048-9843(90)90009-7
Rabin, R. (2014). Blended learning for leadership: The CCL approach. [White Paper] Retrieved January 20, 2020, from the Center for Creative Leadership: www.ccl.org/wp-content/uploads/2015/04/BlendedLearningLeadership.pdf.
Rosch, D. M., & Collins, J. D. (2019). Peaks and Valleys: A Two-year study of student leadership capacity associated with campus involvement. Journal of Leadership Education, 18(1), 68–85. https://doi.org/10.12806/V18/I1/R5.
Rosch, D.M. & Collins, J.D. (2020). Validating the ready, willing, and able leader scale of student leadership capacity. Journal of Leadership Education, 19(1), 84-99. https://doi.org/10.12806/v19/i1/r3
Rosch, D.M. & Headrick, J. (2020). Competition as leadership pedagogy: An initial analysis of the Collegiate Leadership Competition. Journal of Leadership Education, 19(2), 1-12. https://doi.org/10.12806/v19/i2/r1
Rost, J. C. (1993). Leadership for the 21st century. Praeger Publishers.
Sailer, M., & Homner, L. (2020). The gamification of learning: A meta-analysis. Educational Psychology Review, 32, 77-112. https://doi.org/10.1007/s10648-019-09498-w
Sanchez, D. R., & Van Lysebetten, S. (2017). Findings from a meta-analysis on training games and learning outcomes: Future directions. In 32nd Annual Conference of the Society for Industrial and Organizational Psychology.
Tauer, J. M., & Harackiewicz, J. M. (2004). The effects of cooperation and competition on intrinsic motivation and performance. Journal of Personality and Social Psychology, 86(6), 849-861. https://doi.org/10.1037/0022-3514.86.6.849
Taylor, P. J., Russ-Eft, D. F., & Chan, D. W. (2005). A meta-analytic review of behavior modeling training. Journal of Applied Psychology, 90(4), 692. https://doi.org/10.1037/0021-9010.90.4.692
Tews, M. J., Michel, J. W., & Noe, R. A. (2017). Does fun promote learning? The relationship between fun in the workplace and informal learning. Journal of Vocational Behavior, 98, 46-55. https://doi.org/10.1016/j.jvb.2016.09.006
Thomas, S., & Busby, S. (2003). Do industry collaborative projects enhance students’ learning? Education + Training, 45(4/5), 226–235. https://doi.org/10.1108/00400910310478157
Vallerand, R. J., Gauvin, L. I., & Halliwell, W. R. (1986). Negative effects of competition on children’s intrinsic motivation. The Journal of Social Psychology, 126(5), 649-656. https://doi.org/10.1080/00224545.1986.9713638
Vandercruysse, S., Vandewaetere, M., Cornillie, F., & Clarebout, G. (2013). Competition and students’ perceptions in a game-based language learning environment. Educational Technology Research and Development, 61(6), 927-950. https://doi.org/10.1007/s11423-013-9314-5
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.