Introduction
While the field of leader development has been heavily studied in the professional and collegiate worlds, significantly less research exists on the formation of leadership competencies during the school-age years (Murphy & Johnson, 2011). Leadership is cited as a desirable trait by college admission officers and workplace professionals. Additionally, high school leadership exposure is correlated with increased adult earnings (Kuhn & Weinberger, 2005). In the workplace, individual leader development is essential to the process of organizational leadership development, which in turn is important to organizational success (Day & Harrison, 2007).
Many definitions of youth exist, including “the time of life when one is young; especially the period between childhood and maturity” and “the early period of existence, growth, or development” (Merriam-Webster, n.d.). For the sake of this article, youth refers to the school-age years of Kindergarten through 12th grade. Specifically, we studied eighth-grade students, all of whom could be classified in the periods of early or late adolescence, spanning the age ranges of 10 to 14 and 15 to 19 respectively, depending on the student’s age (Santrock, 2009).
Although leadership development can be defined to encompass leader development, Day, Fleenor, Atwater, Sturm, and McKee (2014) parse the difference between leader development and leadership development in their review of the past 25 years of research and theory advancing leader and leadership development: “Leader development focuses on developing individual leaders whereas leadership development focuses on a process of development that inherently involves multiple individuals (e.g., leaders and followers or among peers in a self-managed work team)” (p. 64). In this sense, the construct measured here concerns the individual leader development of the measured students rather than the leadership of the collective group. As such, this measure benefits quantitative researchers and practitioners by providing a tool through which the LSE construct can be benchmarked in youth, ideally for the purposes of increasing this capacity and, ultimately, youths’ current and future capacity for leadership in society.
Conceptual Framework. Described as the Father of Social Cognitive Theory and of self-efficacy, Albert Bandura states that social cognitive theory “analyzes developmental changes across the life span in terms of evolvement and exercise of human agency” (Bandura, 2006, p. 1). It investigates the relationship and interplay between personal factors and outside influences over the course of life. Within this framework, efficacy plays a central role in affecting human agency and is a key “resource in self-development, successful adaptation, and change” (Bandura, 2006, p. 4). Put another way, “unless people believe they can produce desired effects by their actions, they have little incentive to act or to persevere in the face of difficulties” (Bandura, 2006, p. 3). As such, self-efficacy is conceptualized as “beliefs in one’s capabilities to organize and execute the course of action required to produce given attainments” (Bandura, 1997, p. 3). However, Bandura specifies that perceived self-efficacy can vary across domains, and that generalized measures of self-efficacy have limited explanatory and predictive value because they may have little relevance to the domain under study; he therefore recommends tailoring measures to the specific domain of interest (Bandura, 2006).
We use a variant of Bandura’s (1986) definition of self-efficacy to define leader self-efficacy (the key construct measured by the scale) as a leader’s judgments of their capabilities to organize and execute courses of action required to attain designated types of leadership outcomes. Over the past 15 years, leader self-efficacy (LSE) has attracted interest among researchers, but the concept has yet to be specifically adapted and applied to measurement in youth. Within the context of a larger project that examines various leadership qualities and perspectives across multiple constituencies at the school, student leader self-efficacy was chosen as the focus for this scale creation because of its implications for enhancing leader development (Hannah, Avolio, Luthans, & Harms, 2008). Recent literature suggests that there is merit in focusing on this construct through youth leader development programs (Rehm, 2014). Additionally, self-efficacy is a particularly salient construct for youth that can be enhanced through activities, incentives, and experiences (Bandura, 1993).
A closely related but distinct construct is leader developmental efficacy. Murphy and Johnson (2016) parse the difference between leader self-efficacy and leader developmental efficacy as follows: “leader self-efficacy, focuses on her beliefs about her ability to succeed as a leader, and the second, leader developmental efficacy (LDE), focuses on her beliefs about her ability to change and develop her current leadership skills” (p. 73). LDE is a newer concept than LSE and currently lacks extensive adult measurement and research from which to draw. Nevertheless, they recommend that “rather than using general measures of confidence or general self-efficacy, a specific measure of leader self-efficacy for a particular leadership situation (e.g., college leadership, etc.) or LDE will more accurately gauge leaders’ beliefs in their capabilities” (p. 80). Given this distinction and the richer research base available for LSE, our scale focuses on LSE specifically within a middle school population.
Literature Review
While multiple studies have investigated LSE and several scales have been created to support these studies, no scale exists specifically for measuring LSE in youth. The most comprehensive survey of LSE scales to date was conducted in 2012 by Hannah, Avolio, Walumbwa, and Chan as the background to their proposed Leader Self and Means Efficacy Scale. Their research found four scales which were used in 16 subsequent studies. An additional 14 studies used unique scales catered to their specific situations. None of these scales involved youth below college age. According to Hannah et al. (2012), the four scales used in subsequent work were (a) Murphy’s (1992) unpublished doctoral dissertation, which spawned nine studies; (b) Feasel’s (1995) master’s thesis, which was used in two additional studies; (c) Paglis and Green’s (2002) measure, which was applied in three research settings; and (d) Kane and Baltes’ (1998) unpublished conference paper, which was used in an additional two studies. All of these studies used a single-dimension measure of LSE, framed either as perceptions of general leadership capabilities or as confidence to lead. Additionally, Bobbio and Manganelli developed an LSE measure in 2009, part of which was adapted from the Paglis and Green (2002) scale but the majority of which was newly created; this measure, however, was not referenced by Hannah et al. (2012). Like the McCormick, Tanguma, and López-Forment (2002) study, which utilized the Kane and Baltes (1998) scale, this study was available in the public domain and provided underlying constructs (see Table 1).
An additional study not covered by Hannah et al. (2012) was the only study found relating to LSE measurement in youth: a 15-item Chinese adaptation of the 26-item Roets Rating Scale for Leadership (RRSL), used with eighth-grade students in Hong Kong (Chan, 2000; Chan, 2007). However, the LSE components contained in the Chinese RRSL-15 scale (Have strong convictions, Have self-confidence, Can say opinions in public, Think one can do well as a leader) do not correspond closely with existing understandings from other LSE measures.
While the literature supports the theoretical construct of leader self-efficacy, this has not been applied effectively to youth. There is a gap in the literature regarding measurement of these beliefs during adolescence. Our scale provides an instrument which can be used to further the understanding of this critical construct during this sensitive life stage.
Methods
Sample. This paper utilizes data from a larger 2016-17 study at a leading private school partnering with the Center for Creative Leadership to examine various facets of leadership, leadership programming, and leadership development within their organization. The goal of the overall project was to support the school community in efforts to create a common leadership language and positive leadership experiences for students, teachers, and community members while also contributing to the generalizable knowledge of youth leader development. The project team gathered information about leadership from the perspectives of students, teachers, and families in order to facilitate reflection and decision-making.
Our study examines a subset of data from survey items related to leader self-efficacy of the eighth-grade participants in this leadership development initiative. We utilized the quantitative surveys collected both before and after the eighth-grade pilot leadership development program. All 120 eighth-grade students who participated in the pilot study during the 2016-17 school year were asked to take both the baseline survey in Fall 2016 and the end-of-year survey in Spring 2017. Students for whom parental permission was not received were eliminated from the analysis.
Measurement Development. As measuring youth leader self-efficacy is still a new area of inquiry, it was necessary to create an original survey instrument to measure this construct. Since the survey utilized with the eighth-grade students was created in partnership with the Center for Creative Leadership (CCL) and the school, existing questions from the CCL item bank were used so that comparisons could be made in the larger student population and integrated with a greater body of work at the CCL. Measures from previous LSE studies were analyzed and conceptually mapped onto the underlying constructs of two LSE scales (Bobbio & Manganelli, 2009; McCormick, Tanguma, & López-Forment, 2002). These scales were chosen for conceptual mapping because they reside in the public domain. Five additional questions were added to the existing CCL questions to supplement underrepresented subcomponents of the conceptual mapping. Table 1 displays the constructs from the two LSE scales and the applicable questions used in the school survey given to the eighth-grade students.
Table 1
LSE Measures Coordinated to Previously Published LSE Concepts/Dimensions
| Kane and Baltes (1998) | Bobbio and Manganelli (2009) | Applicable Questions in Survey | Item # |
| Perform well as a leader across different group settings | Showing self-awareness and self-confidence | I believe I have the ability to be a leader | 13 |
| | | I see myself as a leader | 1 |
| | | I am aware of my own strengths (things that I’m good at) and what areas I need to develop | 2 |
| | | I know how I can help make my world a better place | 9 |
| | | I know how to be a leader | 12 |
| Motivate group members | Motivating people | I can help others work hard on a task | 14 |
| Build group members’ confidence | Starting and leading change processes in groups | I can help others feel good about what we are doing | 15 |
| Develop teamwork | Gaining consensus of group members | I value working with other people in groups | 3 |
| | | I work well with others and share leadership in order to solve problems effectively | 6 |
| “Take charge” when necessary | | I can take charge when necessary | 16 |
| Communicate effectively | Building and managing interpersonal relationships within the group | I can communicate effectively with others | 17 |
| | | I think making friends and developing relationships with others can help us all to succeed | 8 |
| Develop effective task strategies | | I look at challenges in different ways in order to find the best solution | 4 |
| | | Before I act, I create a plan for achieving goals that identifies possible outcomes and consequences | 7 |
| | | When I have to do something (an assignment, a task) or make a decision, I think through it first and decide what’s important | 5 |
| Assess the strengths and weaknesses of the group | Choosing effective followers and delegating responsibilities | I understand who is better at different tasks within a group | 18 |
| Additional items related to LSE included in survey by CCL | | I believe that leadership can be taught | 11 |
| | | Becoming a good leader takes time | 10 |
Note: Updated Student Leadership measures are available through the Center for Creative Leadership. Please contact Micela Leis (leism@ccl.org) for details.
All items are derived from the same underlying theoretical and empirical constructs of previous scales yet are catered to the youth population involved in this project. By utilizing the strengths of prior instruments, this scale aimed to capture the key components of the LSE concept while reflecting the different audience. The goal of modifying questions to create a new instrument was to provide a robust perspective on the LSE of the eighth-grade students participating in the pilot study.
Analysis and Interpretation. In order to create the most efficient measure possible, the LSE scale was analyzed in three phases: readability analysis, inter-item and item-total correlations, and factor analysis. Specifically, a principal component analysis (PCA) was conducted to examine the dimensionality of the different factors using the quantitative baseline data collected from all students surveyed. Since some questions were designed specifically for the school and as such had never been tested before, PCA helped eliminate excessive or unproductive items and evaluate whether the items represented one latent factor of LSE. These three phases were utilized to increase the reliability and validity of the scale through item reduction.
Limitations. The main limitation of this study is that this measure and its associated classical test-theory assumptions (e.g., use of Cronbach’s alpha) are sample-dependent and were tested using a non-representative sample of eighth-grade students in a private school setting (see Embretson & Reise, 2000). Certainly, this sample is not generalizable to all eighth-graders in the United States, nor to an international context. We hope, however, that future studies may build upon our initial evidence of scale reliability and evaluate the use of this measure among increasingly heterogeneous populations of youth.
Results
Readability Analysis. A readability analysis was conducted utilizing three tests available on the website readability.io in order to evaluate each individual item as well as the scale as a whole. From the many possible tests, the three tests chosen represent different approaches to assessing readability: the Flesch-Kincaid Grade Level (FKGL), the Gunning-Fog Score (GFS), and the Automated Readability Index (ARI). The FKGL calculates a score from sentence length, measured as the number of words per sentence, and word length, measured as the number of syllables per word. The GFS incorporates words per sentence as well as word complexity, judged by a syllabic threshold (Child, 2017). The ARI uses character counts rather than syllables, in addition to words per sentence, to measure readability (The Automated Readability Index, 2017).
The readability scores represent grade-level equivalents; if two or more of the three tests computed a score of 8.0 (eighth grade) or greater for an individual item, that item was eliminated from the scale. This resulted in the removal of five items: 5, 6, 7, 8, and 17. Readability scores were also calculated for the scale in its entirety both before and after item removal. The elimination of these five items reduced the grade-level scores for the entire scale; all three readability tests for the whole scale were below 8.0 after item removal.
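To illustrate how this screening rule operates, the sketch below implements the three published readability formulas and the two-of-three rejection rule in Python. The counts passed in (words, sentences, syllables, characters, complex words) are hypothetical; the study itself relied on readability.io rather than this code.

```python
# Illustrative reimplementation of the three readability formulas described above.
# The counts below are for a single hypothetical scale item, not the actual survey text.

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: words per sentence and syllables per word."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def gunning_fog(words: int, sentences: int, complex_words: int) -> float:
    """Gunning-Fog Score: 'complex' words are those with three or more syllables."""
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

def automated_readability_index(characters: int, words: int, sentences: int) -> float:
    """Automated Readability Index: uses character counts rather than syllables."""
    return 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43

def keep_item(scores: list, grade_cutoff: float = 8.0) -> bool:
    """Retention rule: reject an item if two or more tests score at or above grade 8."""
    return sum(score >= grade_cutoff for score in scores) < 2

scores = [
    flesch_kincaid_grade(words=10, sentences=1, syllables=14),
    gunning_fog(words=10, sentences=1, complex_words=1),
    automated_readability_index(characters=45, words=10, sentences=1),
]
print([round(s, 1) for s in scores], keep_item(scores))
```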
Inter-Item and Item-Total Correlations. The remaining 13 items in the LSE scale were then analyzed using inter-item and item-total correlations. The inter-item correlation matrix showed that all values were positive except the correlation between Item 1 and Item 10, which was slightly negative. Item-total correlations revealed that Item 10 had the smallest item-total correlation and that removing this item would increase internal consistency (α = .842) by .002. Additionally, inter-item correlations for Item 10 were all less than .3, while Item 1 had four correlations over .3. Therefore, Item 10 was removed.
Inter-item and item-total correlations were then re-calculated for the new 12-item scale (see Tables 2 and 3). The inter-item correlation matrix revealed no negative correlations, and all items had at least one correlation above .30. Furthermore, the item-total correlations indicated that removing two items, Item 3 and Item 11, would have improved internal consistency (α = .844) by .001 and .006 respectively. Although these items had the lowest item-total correlations and their removal would have increased internal consistency slightly, the decision was made to keep them in the scale at this stage based on their highest inter-item correlations, which were .42 for Item 3 and .31 for Item 11; both of these correlations occurred with Item 15.
Table 2
Inter-Item Correlation Matrix for 12 Item LSE Scale
| | Item 1 | Item 2 | Item 3 | Item 4 | Item 9 | Item 11 | Item 12 | Item 13 | Item 14 | Item 15 | Item 16 |
Item 1 | |||||||||||
Item 2 | .25 | ||||||||||
Item 3 | .16 | .18 | |||||||||
Item 4 | .20 | .21 | .29 | ||||||||
Item 9 | .29 | .22 | .21 | .30 | |||||||
Item 11 | .17 | .16 | .17 | .11 | .28 | ||||||
Item 12 | .50 | .28 | .30 | .44 | .58 | .23 | |||||
Item 13 | .48 | .22 | .21 | .38 | .53 | .18 | .77 | ||||
Item 14 | .32 | .20 | .27 | .27 | .31 | .21 | .52 | .51 | |||
Item 15 | .34 | .23 | .42 | .27 | .45 | .31 | .50 | .55 | .54 | ||
Item 16 | .33 | .31 | .12 | .28 | .39 | .13 | .51 | .53 | .42 | .30 | |
Item 18 | .12 | .27 | .18 | .25 | .29 | .27 | .29 | .20 | .34 | .38 | .55 |
Table 3
Item-Total Correlations for 12-Item LSE Scale and Cronbach’s Alpha If Item Deleted
| | Corrected Item-Total Correlation | Cronbach’s Alpha if Item Deleted |
Item 1 | .462 | .836 |
Item 2 | .363 | .842 |
Item 3 | .361 | .845 |
Item 4 | .438 | .838 |
Item 9 | .574 | .828 |
Item 11 | .315 | .850 |
Item 12 | .746 | .814 |
Item 13 | .688 | .818 |
Item 14 | .581 | .828 |
Item 15 | .650 | .823 |
Item 16 | .565 | .828 |
Item 18 | .457 | .836 |
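For readers wishing to reproduce statistics of the kind reported in Tables 2 and 3, the following Python sketch computes Cronbach’s alpha, corrected item-total correlations, and alpha-if-item-deleted from an assumed pandas DataFrame `items` (one column per retained item, one row per student); the variable and column names are illustrative, not those used in the original analysis.

```python
import pandas as pd

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1)
    total_variance = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def item_analysis(df: pd.DataFrame) -> pd.DataFrame:
    """Corrected item-total correlation and alpha-if-item-deleted for every item."""
    rows = {}
    for col in df.columns:
        remaining = df.drop(columns=col)
        rows[col] = {
            "corrected_item_total_r": df[col].corr(remaining.sum(axis=1)),
            "alpha_if_deleted": cronbach_alpha(remaining),
        }
    return pd.DataFrame(rows).T

# Hypothetical usage, where `items` holds the retained item responses:
# inter_item_matrix = items.corr()       # Table 2 analogue
# summary = item_analysis(items)         # Table 3 analogue
# overall_alpha = cronbach_alpha(items)
```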
Factor Analysis. The final stage of the youth LSE scale creation involved factor reduction through principal component analysis (PCA). Assumptions were first evaluated before performing the PCA.
Assumptions for factor analysis. Factorability of these 12 items was further examined through sampling adequacy. The Kaiser-Meyer-Olkin measure was .83, which is well above the recommended threshold of .6 and classified as “meritorious” by Kaiser (1974, p. 35). The diagonals of the anti-image correlation matrix were all above .67, well above the recommended minimum of .5, and all but three were equal to or above .80, which is considered ideal. Bartlett’s Test of Sphericity was significant (χ2 (66) = 367.08, p < .01), suggesting that the data were factorable. These indicators all suggest that factor analysis was appropriate because of the shared common variance among the items.
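As an illustration, the factorability checks described above could be reproduced with the open-source factor_analyzer package as sketched below; the original analysis was presumably run in a standard statistics package, and the `items` DataFrame here is random stand-in data rather than the study’s responses. The per-item KMO values returned correspond to the diagonals of the anti-image correlation matrix.

```python
import numpy as np
import pandas as pd
from factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Stand-in data: replace with the actual DataFrame of the 12 retained item responses.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(87, 12)),
                     columns=[f"item_{i}" for i in range(1, 13)])

chi_square, p_value = calculate_bartlett_sphericity(items)  # Bartlett's Test of Sphericity
kmo_per_item, kmo_total = calculate_kmo(items)              # per-item MSA values and overall KMO

print(f"Bartlett: chi2 = {chi_square:.2f}, p = {p_value:.4f}")
print(f"KMO overall = {kmo_total:.2f}; smallest per-item value = {kmo_per_item.min():.2f}")
```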
However, factor analysis assumes no outliers, so a 12-item difference score was calculated and used for descriptive statistics. Two outlier cases were identified through the boxplot. Evaluation of these data points revealed potential respondent fatigue, as all of the same responses had been entered during one administration of the survey, so the two outliers were removed. This decreased the mean by less than 0.002 and decreased the standard deviation by 0.05.
Given the sample size and exploratory nature of this measurement study, inter-item and item-total correlations were then re-calculated for the 12-item scale excluding the outliers. Item 11 was subsequently removed because its highest inter-item correlation decreased below .30. Removing this item increased the newly calculated reliability statistic from α = .836 back to .844.
Principal component analysis. Principal component analysis was then conducted on the remaining 11 items. PCA was chosen since the primary research interest was reducing the number of variables (Tabachnick & Fidell, 2013, p. 640). Since the measure targeted the specific construct of youth LSE, and the likelihood of correlation among factors was therefore high, an oblique Promax rotation was preferred to allow for correlation between the factors and to clarify which variables did and did not correlate (Tabachnick & Fidell, 2013, pp. 644-645). The analysis returned three factors with eigenvalues greater than 1.0, explaining 40.4%, 10.1%, and 9.6% of the variance, or 60.1% in total. However, examination of the scree plot revealed the potential for a one-factor solution (see Figure 1), with an eigenvalue of 4.448 for the first factor. Although multiple factor solutions and rotations were explored in search of simple structure (Thurstone, 1947), the Promax rotation (allowing for correlation between the items) with an unforced three-factor solution was the most revealing. Items 18, 16, and 2 loaded on Factor Two, which had an eigenvalue of 1.115, while Items 3, 15, and 4 loaded on Factor Three, which had an eigenvalue of 1.052. These items were eliminated to reduce the scale to the items loading only on the first factor because the resulting five-item solution offered the most explanation.
Figure 1. Scree Plot of 11-Item Scale
Additionally, communalities were checked and the loadings were acceptable, with three in excess of .71 (considered excellent), an additional item in excess of .55 (considered good), and the last item above .32 (considered poor) (Tabachnick & Fidell, 2013, p. 654).
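A hedged sketch of this extraction step is shown below, again using the factor_analyzer package, whose "principal" method provides a principal-components-style extraction and supports an oblique Promax rotation. The `items_11` DataFrame is random stand-in data rather than the study’s responses; the eigenvalues, rotated loadings, and communalities it returns are the quantities interpreted above.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Stand-in data: replace with the 11 retained item responses (n = 87 students at baseline).
rng = np.random.default_rng(0)
items_11 = pd.DataFrame(rng.integers(1, 6, size=(87, 11)),
                        columns=[f"item_{i}" for i in range(1, 12)])

# Unforced three-factor extraction with an oblique Promax rotation.
fa = FactorAnalyzer(n_factors=3, method="principal", rotation="promax")
fa.fit(items_11)

eigenvalues, _ = fa.get_eigenvalues()   # plot against component number to produce a scree plot
loadings = fa.loadings_                 # rotated pattern loadings: which items load on which factor
communalities = fa.get_communalities()  # proportion of each item's variance accounted for

print(eigenvalues[:3])                  # eigenvalues greater than 1.0 flag candidate factors
print(communalities.round(2))
```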
Our factor analysis sought to derive the optimal factor to measure LSE through a weighted average measure. Of all the factors, Factor One items were most strongly linked to existing definitions of LSE. Therefore, after consultation with the theoretical framework and item text, these five items were retained to form the youth LSE scale.
A weighted sum score was utilized in order to balance the uneven loadings of the items on the factor (DiStefano, Zhu, & Mindrila, 2009). The weight for each item was created as the percentage of that item’s factor loading relative to the sum of the factor loadings; the proportions of the factor loadings were maintained in the weighting, but the total was recalibrated to 100%. Item numbers, their corresponding questions, and factor loadings are shown in Table 4. The five-item weighted youth LSE scale had a high level of internal consistency at pre-test, as measured by a Cronbach’s alpha of 0.818 with a confidence interval of 0.749 to 0.872. Additionally, reliability statistics were calculated with post-test data, and the Cronbach’s alpha was 0.720 with a confidence interval of 0.613 to 0.805. The overlap in confidence intervals strengthens our confidence in the internal consistency of the scale, and differences between the pre-test and post-test alphas could potentially be explained by the small sample size (n = 87), which grew smaller at post-test (n = 83).
Table 4
Item Numbers, Questions, and Item Loadings for Youth LSE Factor (α = .818)
| | Corresponding Question | Factor Loading |
Item 13 | I believe I have the ability to be a leader. | .93 |
Item 12 | I know how to be a leader. | .85 |
Item 1 | I see myself as a leader. | .82 |
Item 9 | I know how I can help make my world a better place. | .60 |
Item 14 | I can help others work hard on a task. | .38 |
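The weighted sum scoring described above can be expressed compactly, as in the sketch below, which uses the loadings reported in Table 4. The item column names and the 1-5 response scale in the example are assumptions for illustration only.

```python
import pandas as pd

# Factor loadings for the final five-item youth LSE scale (Table 4).
loadings = pd.Series({
    "item_13": 0.93,  # I believe I have the ability to be a leader.
    "item_12": 0.85,  # I know how to be a leader.
    "item_1":  0.82,  # I see myself as a leader.
    "item_9":  0.60,  # I know how I can help make my world a better place.
    "item_14": 0.38,  # I can help others work hard on a task.
})

# Proportional weights: each loading divided by the sum of loadings (recalibrated to 100%).
weights = loadings / loadings.sum()

def youth_lse_score(responses: pd.DataFrame) -> pd.Series:
    """Weighted sum score: each item response multiplied by its proportional loading weight."""
    return (responses[weights.index] * weights).sum(axis=1)

# Example for one hypothetical student (assumed 1-5 response scale).
student = pd.DataFrame([{"item_13": 4, "item_12": 3, "item_1": 4, "item_9": 5, "item_14": 4}])
print(youth_lse_score(student))
```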
Discussion and Implications
The creation of the youth leadership scale through this study represents a significant contribution to the future study of this topic and holds implications for potential educational initiatives. Specifically, many schools and youth development organizations tout leader development as an educational outcome but often lack empirical understanding of the underlying constructs and how they are measured. Vast potential exists for these institutions to intentionally craft and measure learning experiences to prepare all of their students more fully for future leadership opportunities by using this scale. These organizations, including both curricular and after-school programs, could benefit from deliberately seeking to test and develop the LSE of youth involved in their programming. Examples of organizations beyond traditional schools with programs which could measure youth LSE include the YMCA, Boys and Girls Club, 4-H, FFA, Boy Scouts, Girl Scouts, and local chamber of commerce youth leadership development programs. Furthermore, some public and charter schools are also now seeking to address leadership as an outcome for their students, and this scale could increase their understanding of the programs that they implement to better prepare all youth. As the literature demonstrates (Curran & Wexler, 2017; Murphy & Johnson, 2016), such programs provide an opportunity to impact more students and increase their belief in their ability to lead, thus hopefully widening the future leadership pool of business, education, and civic leaders. Additionally, optimal levels of leader self-efficacy may vary with context and desired student learning (Machida-Kosuga, 2017), which further adds to the need to measure this construct over multiple time points and situations throughout youth development.
Important to note is the opportunity this study presents both to measure leadership education outcomes and to guide the creation, implementation, and continuous improvement of these teaching and learning efforts. In this regard, we highlight a central component of self-efficacy: that it is specific to performance domains rather than an overall aspect of a student (e.g., Betz, 2000; Lent, Brown, & Hackett, 1994). In considering leader self-efficacy among youth, we note that our measure contains items reflecting both beliefs (e.g., in one’s ability, self-concept as a leader) and knowledge (e.g., how to lead, helping make my world a better place, helping others). Taken together, we see enormous potential for programming that introduces leadership to youth not as an abstraction removed from context nor as a set of mandates, but instead as actions and relationships within contexts that might realistically confront youth. Notably, our measure indicates the importance of directing leadership education efforts toward student engagement in prosocial behaviors: those that help fellow students, the school, the community, and the world. When creating leadership programming, we therefore suggest that educators attempt to creatively blend understandings of leadership with demonstrations of its impact, knowledge with action, and inspiration with education, using developmentally appropriate pedagogies and activities.
Turning to the measurement aspects of this study, we note our specific focus on ensuring that the survey was made applicable to youth in an effort to maximize instrument efficiency and reduce cognitive burden (Groves et al., 2009). Conducting readability analyses is, we believe, essential when creating new surveys aimed at psychometrically capturing affective traits amongst youth in connection with their learning and growth. Additionally, we note that our final scale evidences a notable degree of reliability (α = .818) for a five-item measure, suggesting that evaluating latent traits among youth might indeed benefit from the use of constructs with a relatively small number of items closely connected to developmental theories.
Finally, education at large and youth in every setting could benefit from programs focused on developing their LSE, and utilizing this scale as an outcome measure could make these programs more targeted and efficient at achieving long-term educational benefits demonstrated in high school, college, and beyond. Is this new youth LSE scale predictive of future leadership initiative and success? This could be studied in various contexts of private, public, and charter schools as well as after-school and community-based programs. Future research could be extended to other segments of this age group as well as other age groups. Prospective studies could also collect additional student data and use regression to control for environmental factors. Utilizing the scale created through this study can make this research less cumbersome and more accessible to both academics and practitioners. Future studies could also examine leader self-efficacy versus leader developmental efficacy (LDE) and create an LDE measure to complement our newly created youth LSE scale.
Conclusion
Leaders, particularly in educational settings, should strive to develop the LSE construct in their students in order to prepare them for the best possible future. This in turn could increase the potential leadership pipeline for organizations and communities. By expanding the leadership equation beyond the traditional path of high talent identification and training, and using innovative measures to demonstrate educational effectiveness, researchers and educators can ideally empower more individuals to address both local and global challenges as self-efficacious leaders (Van Velsor & Wright, 2012).
References
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.
Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educational Psychologist, 28(2), 117-148.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W.H. Freeman and Company.
Bandura, A. (2006). Adolescent development from an agentic perspective. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (pp. 1-43). Greenwich, CT: Information Age Publishing.
Betz, N. E. (2000). Self-efficacy theory as a basis for career assessment. Journal of Career Assessment, 8(25), 205 – 222.
Bobbio, A., & Manganelli, A. M. (2009). Leadership self-efficacy scale: A new multidimensional instrument. TPM-Testing, Psychometrics, Methodology in Applied Psychology, 16(1), 3-24.
Chan, D. W. (2000). Assessing leadership among Chinese secondary students in Hong Kong: The use of the Roets Rating Scale for leadership. Gifted Child Quarterly, 44(2), 115-122.
Chan, D. W. (2007). Leadership competencies among Chinese gifted students in Hong Kong: The connection with emotional intelligence and successful intelligence. Roeper Review, 29(3), 183-189.
Child, D. (2017, August 15). Measure text readability. Retrieved from: https://readable.io/text/
Curran, T., & Wexler, L. (2017). School‐Based Positive Youth Development: A Systematic Review of the Literature. Journal of School Health, 87(1), 71-80.
Day, D. V., Fleenor, J. W., Atwater, L. E., Sturm, R. E., & McKee, R. A. (2014). Advances in leader and leadership development: A review of 25 years of research and theory. The Leadership Quarterly, 25(1), 63-82.
Day, D. V., & Harrison, M. M. (2007). A multilevel, identity-based approach to leadership development. Human Resource Management Review, 17(4), 360-373.
DiStefano, C., Zhu, M., & Mindrila, D. (2009). Understanding and using factor scores: Considerations for the applied researcher. Practical Assessment, Research & Evaluation, 14(20), 1-11.
Efficacy. (2011). In Merriam-Webster.com. Retrieved June 28, 2017, from https://www.merriam-webster.com/dictionary/efficacy
Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. New York, NY: Taylor & Francis.
Feasel, K. E. (1995). Mediating the relation between goals and subjective well-being: Global and domain-specific variants of self-efficacy. Unpublished master’s thesis, University of Illinois at Urbana-Champaign.
Groves, R., Fowler, F., Jr., Couper, M., Lepkowski, J., Singer, E., & Tourangeau, R. (2009). Survey methodology. Hoboken, NJ: Wiley-Interscience.
Hannah, S. T., Avolio, B. J., Luthans, F., & Harms, P. D. (2008). Leadership efficacy: Review and future directions. The Leadership Quarterly, 19(6), 669-692.
Hannah, S. T., Avolio, B. J., Walumbwa, F. O., & Chan, A. (2012). Leader Self and Means Efficacy: A multi-component approach. Organizational Behavior and Human Decision Processes, 118(2), 143-161.
Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31-36.
Kane, T. D., & Baltes, T. R. (1998). Efficacy assessment in complex social domains: Leadership efficacy in small task groups. Paper presented at the annual meeting of the Society of Industrial and Organizational Psychology, Dallas, TX.
Kuhn, P., & Weinberger, C. (2005). Leadership skills and wages. Journal of Labor Economics, 23(3), 395-436. doi:10.1086/430282
Lent, R. W., Brown, S. D., & Hackett, G. (1994). Toward a unifying social cognitive theory of career and academic interest, choice, and performance. Journal of Vocational Behavior, 45, 79-122.
Machida‐Kosuga, M. (2017). The Interaction of Efficacy and Leadership Competency Development. New Directions for Student Leadership, 2017(156), 19-30.
McCormick, M. J., Tanguma, J., & López-Forment, A. S. (2002). Extending self-efficacy theory to leadership: A review and empirical test. Journal of Leadership Education, 1(2), 34-49.
Murphy, S. E. (1992). The contribution of leadership experience and self-efficacy to group performance under evaluation apprehension. Unpublished doctoral dissertation, University of Washington, Seattle, WA.
Murphy, S. E., & Johnson, S. K. (2011). The benefits of a long-lens approach to leader development: Understanding the seeds of leadership. Leadership Quarterly, 22(3), 459. doi:10.1016/j.leaqua.2011.04.004
Murphy, S. E., & Johnson, S. K. (2016). Leadership and Leader Developmental Self‐Efficacy: Their Role in Enhancing Leader Development Efforts. New Directions for Student Leadership, 2016(149), 73-84.
Paglis, L. L., & Green, S. G. (2002). Leadership self-efficacy and managers’ motivation for leading change. Journal of Organizational Behavior, 23, 215–235.
Rehm, C. J. (2014). An evidence-based practitioner’s model for adolescent leadership development. Journal of Leadership Education, 13(3), 83-97.
Santrock, J. (2009). Adolescence (13th ed.). New York: McGraw-Hill.
Tabachnick, B. G. & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Upper Saddle River, NJ: Pearson Education Inc.
Thurstone, L. L. (1947). Multiple-factor analysis. University of Chicago Press.
The Automated Readability Index. (2017, August 25). Retrieved from http://www.readabilityformulas.com/automated-readability-index.php
Van Velsor, E., & Wright, J. (2012). Expanding the leadership equation: Developing next-generation leaders. A white paper. Center for Creative Leadership.
Youth. (n.d.). In Merriam-Webster’s Learner’s Dictionary. Retrieved from http://www.merriam-webster.com/dictionary/youth