The first academic programs in leadership were founded in the late 1980s and early 1990s (Riggio, Ciulla, & Sorenson, 2003), and yet only a few studies have explored the instructional or assessment strategies used by leadership educators. The rare exceptions investigated leadership pedagogy only at the undergraduate level (Allen & Hartman, 2009; Eich, 2008; Jenkins, 2012, 2013; Zimmerman-Oster & Burkhardt, 1999), while scholarship on teaching and learning at the graduate level remains scant and outdated (Crawford, Brungardt, Scott, & Gould, 2002; Koch, Townsend, & Dooley, 2005; Mellahi, 2000; Mitchell & Poutiatine, 2001). More routinely, studies have focused on the use of specific pedagogies in graduate leadership education (Stewart, Houghton, & Rogers, 2012). And apart from Jenkins's (2014) global study comparing the use of instructional and assessment strategies in online leadership education between undergraduate and graduate instructors, none of these studies included an international sample. Yet, per the International Leadership Association (ILA) Directory of Leadership Programs, of the more than 2,000 leadership programs that exist today, more than 500 offer graduate-level leadership courses or degrees and over 100 are based outside the U.S. It is thus imperative that a snapshot of the instructional and assessment activity occurring in these programs be shared.
Purpose and Significance of the Study. The purpose of this study was to explore the differences in instructional and assessment strategy use between instructors who teach graduate-level (GL) and undergraduate-level (UL) leadership studies courses. A quantitative research design was used. Specifically, an international web-based questionnaire measured the frequency of use of a defined group of instructional and assessment strategies by instructors who teach academic, credit-bearing face-to-face leadership studies courses. Per the impetus of Andenoro et al. (2013), this study sought to critically examine various curricular designs, at both the GL and UL levels, to provide more evidence to aid the development of curriculum and propel the field of Leadership Education forward through positive impact and development of learners. To do so, the researcher explored the following research questions:
What are the most frequently employed instructional strategies used by instructors teaching academic, credit-bearing face-to-face GL and UL leadership studies courses?
With respect to frequency of instructional strategy use, what differences are there between instructors teaching GL and UL academic, credit-bearing face-to-face leadership studies courses?
What assessment strategies do instructors teaching academic, credit-bearing face-to-face GL and UL leadership studies courses give the most weight in overall grading?
With respect to assessment strategy use, what differences are there between instructors teaching GL and UL academic, credit-bearing face-to-face leadership studies courses?
With the booming growth of leadership studies programs, the impetus for exploring the instructional and assessment strategies used by leadership educators has never been greater. And while there is an abundance of literature exploring specific pedagogies and best practices (e.g., case study, team-based learning) in both GL and UL leadership education, no research explores instructional and assessment strategy use more generally, empirically compares practices between academic levels, or does so on a global scale. To explore the research questions above within the framework of leadership education, lists of commonly utilized instructional and assessment strategies were created. The selection of instructional strategies (see Table 2) was informed by the empirical studies of Jenkins (2012, 2013) and Conger (1992), as well as by Allen and Hartman (2008a, 2008b, 2009), who created one of the first comprehensive lists of leadership development teaching methods found in the literature (see also Avolio, 1999; Day, 2000; Yukl, 2006). The selection of assessment strategies was informed by many of the aforementioned scholars and practitioners who included data or resources on assessment techniques in higher or leadership education (see Table 3). Final selection for inclusion in this study was based on a combination of recommendations from a panel of experts, information gleaned from a pilot study, a review of the literature, and the researcher's expertise and experience. Admittedly, all instructional and assessment methods have their pros and cons. Indeed, because learning leadership and developing leadership skills may differ from learning other content in a traditional classroom setting, leadership education may need different strategies for facilitating learning (Eich, 2008; Komives, Lucas, & McMahon, 2007; Wren, 1994).
Accordingly, leadership education requires its own examination to determine how the effective teaching and learning of leadership is accomplished.
Relatedly, while the quality or use of specific instructional strategies in leadership education has only recently been explored empirically (see Jenkins, 2012, 2013), instructional strategies such as reflection (Densten & Gray, 2001; Guthrie & Jones, 2012), case study (Atkinson, 2014), service learning (Buschlen & Warner, 2014; Seemiller, 2006), teambuilding (Moorhead & Griffin, 2010), research leadership (Jones & Kilburn, 2005), self-assessments (Buschlen & Dvorak, 2011), role-play (Jenkins & Cutchens, 2012; Sogurno, 2003), and simulation (Allen, 2008) have each been explored only marginally and in isolation. At the undergraduate level, Jenkins (2012, 2013) found that instructors who teach face-to-face academic, credit-bearing courses favor discussion-based pedagogies foremost, while group and individual projects and presentations, self-assessments and instruments, and reflective journaling were also used frequently.
Accordingly, Jenkins argues that discussion is the "signature pedagogy" of undergraduate leadership education, drawing on the finding of Shulman (2005), former President of the Carnegie Foundation for the Advancement of Teaching, that signature pedagogies are distinctive ways of teaching that characterize the educational process in a specific profession (Shulman, 2005) or discipline (Gurung, Chick, & Haynie, 2009). Similarly, Schmidt-Wilk's "Editor's Corner" in the Journal of Management Education suggested, though without any empirical basis or differentiation between GL and UL management education, that "since their introduction into management education almost a century ago, cases have become ubiquitous" (2010, p. 492). Additionally, in her article "Signature Pedagogy: A Framework for Thinking about Management Education," Schmidt-Wilk includes projects, both temporary and goal-directed, that "typically involve students in design, problem solving, decision making and or investigative activities . . . and generally culminate in deliverables to some project client or sponsor" (DeFillippi & Milter, 2009, p. 351, as cited in Schmidt-Wilk, 2010, p. 493).
Comparing Instruction and Assessment Between Academic Levels. While age is often the chief characteristic mentioned when describing adult learners, arguably the difference goes far beyond age and years (Holmes & Abington-Cooper, 2000; Pew, 2007; Plemmons, 2006). Moreover, "best practices" are more about what faculty do in the classroom, how they engage with students through instruction and feedback, and the relevance they create between the subject matter and their students (Bain, 2004; Chickering & Gamson, 1987; Fink, 2013). Yet research does imply that some pedagogical resources, provisions, and orientations are more relevant to the outcomes of mature students and graduates, while others are more relevant to younger students (e.g., Yoshimoto, Inenaga, & Yamada, 2007). Moreover, students' cognitive development, for example from a dualistic (right versus wrong) view of the universe on entering postsecondary education to a more relativistic view in their later studies (e.g., Perry, 1997), together with the ways students develop commitments and facilitate relationships, places certain responsibilities on educators to develop frameworks and environments that facilitate learning at those stages (Plemmons, 2006). Arguably, similar decisions regarding the relevancy and appropriateness of content and instruction may be made with respect to undergraduate students' progress across the stages of the Leadership Identity Development Model (Komives, Owen, Longerbeam, Mainella, & Osteen, 2005), the developmental sequencing of leadership curricula (Dugan, 2013), and the degree of relationship between students' work experiences and study (Yoshimoto, 2002). However, except for Yoshimoto et al. (2007), no research was found that compared instructional or assessment strategy use across academic levels and nations simultaneously.
Relatedly, just a few studies have explored the differences between teaching GL and UL courses generally (Pendse & Johnson, 1996), and fewer still have compared specific instructional strategy uses, such as Bruner, Gup, Nunnally, and Pettit's (1997) examination of the differences that emerged when teaching with cases to graduate and undergraduate students in a finance course.
Appropriately, Bruner et al. (1997) suggest, "The professional attributes we want to foster in students should influence how we teach with case studies." The authors argue that the goal of case teaching is not mastery but is instead "…to prepare students for effective professional work." Arguably, the case studies included in Exploring Leadership: For College Students Who Want to Make a Difference (Komives, Lucas, & McMahon, 2013) and A Day in the Life of a College Student Leader (Marshall & Hornak, 2008) are far more appropriate for undergraduate student leaders, even more so residential students, than cases filled with the conflicts affecting CEOs of Fortune 100 companies (Jenkins & Allen, 2012). The instructor's ability to put relevance first to activate student learning perhaps trumps any significant differences between GL and UL pedagogy.
Participants. The participants were 836 (390 GL and 446 UL) instructors who self-reported teaching an academic, credit-bearing face-to-face leadership studies course within the previous two years. This is the largest reported study of these populations to date. After participants' eligibility was confirmed (i.e., they had taught such a course within the previous two years), they were asked to identify one specific corresponding GL or UL academic credit-bearing course, to type the name of that course in a textbox, and to use that course as a reference point when completing the survey.
The analyzed data were collected from a web-based questionnaire through an international study that targeted thousands of leadership studies instructors through three primary sources from March 31, 2013, through May 3, 2013. The first source was the organizational memberships or databases of the following professional associations/organizations or their respective member interest groups: (a) the ILA; (b) the Association of Leadership Educators (ALE); (c) NASPA Student Affairs Professionals in Higher Education, Student Leadership Programs Knowledge Community (NASPA SLPKC); and (d) the National Clearinghouse for Leadership Programs (NCLP). The second source was the attendee list of the 2012 Leadership Educators Institute (LEI), an innovative bi-annual conference-like forum geared specifically toward new to mid-level student affairs professionals and leadership educators who coordinate, shape, and evaluate leadership courses and programs, create co-curricular leadership development opportunities, and experiment with new technologies for doing so. The third source was a random sample of instructors drawn from the ILA Directory of Leadership Programs, a searchable directory of leadership programs available to all ILA members.
While the first and second sources were more of a "shotgun approach," they were also more likely to include ideal participants. The ILA member database, ILA Directory of Leadership Programs, and LEI attendee list provided direct access to members or attendees, respectively; however, the researcher did not have access to individual e-mail addresses for the NASPA SLPKC, ALE, and NCLP groups. And while the listserv managers did send invitation e-mails for this study's survey to their respective listservs, return rates for those groups are not available because the number of recipients was undisclosed. Nonetheless, the return rates for the ILA member directory (12.57%), ILA Directory of Leadership Programs (11.25%), and LEI (25.08%) were promising.
Overall, these data collection procedures provided the researcher with the best possible sources to generalize to the population. Demographic information was collected from participants in the survey to better understand the educational and preparatory experiences of leadership educators. Additionally, questions related to participants’ home institution, program, and department offerings were also included. Table 1 includes the most salient data from these questions. Of note, 41 countries were represented in the sample.
Table 1
Demographic and Educational Majority Survey Data (Graduate | Undergraduate)

Age: 35.2% "55 to 64" | 23.8% "55 to 64"
Location of Institution: 77.1% "USA" (5.7% in both "Canada" and the "UK") | 88.9% "USA" (4.4% …)
Institution Type: 54.6% "4-year Private University" | 58.4% "4-year Public University"
College where Leadership course was located: 32.3% "Business or Management" (25.0% "Education") | 18.8% "Business or Management" (12.2% "Academic Affairs, College-wide, General Education, or no affiliated college")
Academic Department where Leadership course was located: … "Organizational Leadership, or Leadership Studies" (25.0% "Management") | … "Organizational Leadership, or Leadership Studies" (7.3% "Management")
12.1% "Special/Multiple Topics" (6.9% …) | … "Leadership" (11.4% "Special/Multiple Topics")
Course Level: … advanced, or upper level (35.7% "Introductory")
Leadership Degree Offered: 45.9% "Master's" (20.3% "M.B.A.") | 36.8% "Minor" (33.2% …)
Primary Activity at Institution: 58.1% "Full-time faculty" (15.8% "Part-time faculty or adjunct") | 45.5% "Full-time faculty" (23.2% "Full-time staff/administration")
Years in Current Position: 32.4% "more than 10 years" | 35.6% "1-3 years"
Years Working in Higher Education: 37.9% "11-20 years" | 30.6% "more than 20 years"
Experience Teaching the Leadership Course Indicated: 61.6% "More than 5 years" | 42.2% "More than 5 years"
Average Class Size of Course Indicated: 58.5% "15-29 students" | 59.5% "15-29 students"
Experience Teaching Leadership: 46.5% "More than 5 years" | 62.6% "More than 5 years"
12.7% "Leadership" (12.0% …) | 10.5% "College Student Affairs, Development, or Personnel" (9.0% "Higher Education")
Post-Baccalaureate focus on higher education, college teaching, college student development, or closely related field
Post-Baccalaureate focus on leadership
Completed Graduate-Level Leadership Coursework
Type of Research Data. The analyzed data were collected from a web-based questionnaire through an international study. The questionnaire format of the web-based survey implemented as many principles from Andres (2012) and Evans and Mathur (2005) as possible. The questionnaire was modeled after the approach used by Jenkins (2012, 2013) to identify the most frequently used instructional strategies for teaching face-to-face leadership studies courses to undergraduates. In this study, the survey instrument collected demographic information to profile the participants and identified the most frequently used instructional and assessment strategies for teaching GL leadership courses.
Data Analysis Techniques. Answering research questions one and three involved creating frequency tabulations and percentages of responses for the survey items addressing instructional and assessment strategy use. Descriptive statistics were used to analyze the means and standard deviations of the item responses indicating frequency of instructional strategy use, as well as the overall percentages of assessment strategy use. Participants were asked to describe their frequency of use of each of the instructional strategies listed and defined in Table 2, ranked on a frequency-of-use rating scale.
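A minimal sketch of this descriptive analysis, assuming Likert-style numeric responses (the function name and sample ratings below are illustrative, not the study's data):

```python
import statistics

def describe_item(ratings):
    """Mean and sample standard deviation for one strategy's frequency ratings."""
    return statistics.mean(ratings), statistics.stdev(ratings)

# Hypothetical frequency-of-use ratings for a single instructional strategy:
mean_use, sd_use = describe_item([5, 4, 5, 3, 4, 5, 4])
```

Repeating this per survey item would yield the kind of per-strategy means and standard deviations reported in the results, alongside simple frequency counts per response category.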
Table 2
Face-to-Face Instructional Strategies: Descriptions

Case Studies: Students examine written or oral stories or vignettes that highlight a case of effective or ineffective leadership.
Class Discussion: Instructor facilitates sustained conversation and/or question and answer segment with the entire class.
Debates: Student teams argue for or against a position using course concepts, evidence, logic, etc.
Games: Students engage in interactions in a prescribed setting and are constrained by a set of rules and procedures (e.g., Jeopardy, Who Wants to Be a Millionaire, Family Feud).
Guest Speakers: Students listen to a guest speaker/lecturer discuss their personal leadership experiences.
Icebreakers: Students engage in a series of relationship-building activities to get to know one another.
In-Class Short Writing: Students complete ungraded writing activities such as reflective journals or responses to instructor prompts designed to enhance learning of course content.
Interactive Lecture/Discussion: Instructor presents information in 10-20 minute time blocks with periods of structured interaction/discussion between mini-lectures.
Lecture: Students listen to instructor presentations lasting most of the class session.
Media Clips: Students learn about leadership theory/topics through film, television, or other media clips (e.g., YouTube, Hulu).
Problem-Based Learning: Students learn about leadership through the experience of problem solving in specific situations.
Role Play Activities: Students engage in an activity where they act out a set of defined role behaviors or positions with a view to acquiring desired experiences.
Self-Assessments & Instruments: Students complete questionnaires or other instruments designed to enhance their self-awareness in a variety of areas (e.g., learning style, personality type, leadership style).
Service Learning: Students participate in a service learning or philanthropic project.
Simulation: Students engage in an activity that simulates complex problems or issues and requires decision-making.
Small Group Discussions: Students take part in small group discussions on course topics.
Story or Storytelling: Students listen to a story highlighting some aspect of leadership, often told by an individual with a novel experience.
Student Peer Teaching: Students, in pairs or groups, teach designated course content or skills to fellow students.
Team-Building Activities: Students engage in group activities that emphasize working together in a spirit of cooperation (e.g., setting team goals/priorities, delegating work, examining group relationships/dynamics).
The rating scale for assessment strategy use, applied to the list and associated definitions in Table 3, was designed to capture the overall weight instructors placed on each strategy in students' overall course grades. Accordingly, participants reported the weight each assessment strategy carried toward a student's final grade in their courses using the following rating scale:
1 – 0%, I do not use this type of assessment in my course
2 – 1-10%
3 – 11-20%
4 – 21-30%
5 – 31-40%
6 – 41-50%
7 – 51% or more
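To illustrate how such banded responses can be tabulated, the sketch below (hypothetical data; the mapping mirrors the scale above) counts responses per weight band for one strategy:

```python
from collections import Counter

# Scale points 1-7 mapped to the weight bands defined above.
WEIGHT_BANDS = {
    1: "0% (not used)",
    2: "1-10%",
    3: "11-20%",
    4: "21-30%",
    5: "31-40%",
    6: "41-50%",
    7: "51% or more",
}

def tabulate_weights(responses):
    """Frequency tabulation of scale responses for one assessment strategy."""
    counts = Counter(responses)
    return {band: counts.get(point, 0) for point, band in WEIGHT_BANDS.items()}

# Hypothetical responses for a single assessment strategy:
weight_table = tabulate_weights([1, 2, 2, 3, 7, 4, 2])
```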
Face-to-Face Assessment Strategies: Descriptions
Class Participation/Attendance: Students are given points for active participation in course activities.
Exams: Students complete tests or exams, lasting the majority of the class period, intended to assess subject matter mastery.
Group Projects/Presentations: Students work on a prescribed project or presentation in a small group.
Individual Leadership Development Plans: Students develop specific goals and vision statements for individual leadership development.
Major Writing Project/Term Paper: Students write a significant paper exploring course content or research (such as a literature review) as a major course assignment.
Observation/Interview of a Leader: Students observe or interview an individual leading others effectively or ineffectively and report their findings to the instructor/class.
Portfolio or Evidence Collection: Students document their own learning through the creation of a course portfolio.
Quizzes: Students complete short graded quizzes intended to assess subject matter mastery.
Reflective Journals: Students develop written reflections on their experiences or understandings of lessons learned about course content.
Read and Respond: Students are graded on their responses to questions generated by the instructor or drawn from the end of a text chapter, allowing students to explore specific ideas or statements in depth and breadth.
Research Project/Presentation: Students actively research a leadership theory or topic and present findings in oral or written format.
Self-Evaluation: Students respond in writing to criteria set for evaluating their learning.
Short Papers: Students author one or more short papers (ten pages or less in length) exploring course content.
Skill Demonstration: Students physically represent learning through problem-solving ability in relevant contexts.
Student Peer Assessment: Students critique other students' work using previously described criteria and provide specific suggestions for improvement.
Video Creation: Students create short video presentations to be shown in class.
Answering research questions two and four, the comparison between instructors who taught GL and UL courses, involved independent t-tests conducted with statistical software. The analysis compared the mean frequency-of-use responses for the instructional and assessment strategies between the two groups of instructors. The discussion also includes Cohen's d statistics for the two-group comparisons.
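The pooled-variance comparison can be sketched from summary statistics alone. The block below (Python; the function name is illustrative) reproduces the form of the case studies comparison using the rounded means and SDs reported in the results; because the published statistics were computed from unrounded data, the output differs slightly from the reported t(620) = 5.41 and d = 0.43.

```python
import math

def pooled_t_and_cohens_d(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic (pooled variance) and Cohen's d,
    computed from summary statistics rather than raw responses."""
    df = n1 + n2 - 2
    # Pooled variance weights each group's variance by its degrees of freedom.
    pooled_var = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df
    pooled_sd = math.sqrt(pooled_var)
    t = (m1 - m2) / (pooled_sd * math.sqrt(1 / n1 + 1 / n2))
    d = (m1 - m2) / pooled_sd  # standardized mean difference
    return t, df, d

# Rounded means/SDs reported for case studies: GL (n = 272) vs. UL (n = 350).
t, df, d = pooled_t_and_cohens_d(3.67, 0.99, 272, 3.23, 0.99, 350)
```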
Instructional Strategy Use Differences between Instructors Who Teach Face-to-Face Undergraduate- and Graduate-Level Leadership Studies Courses
Small Group Discussion
Self-Assessments & Instruments
Stories or Storytelling
Student Peer Teaching
Role Play Activities
In-Class Short Writing
Note: Of the 836 participants who reported having taught a face-to-face, academic credit-bearing leadership studies course within the last two years, only n = 622 (Graduate: n = 272; Undergraduate: n = 350) progressed through the survey to the questions represented in Table 4.
Assessment Strategy Use Differences between Instructors Who Teach Face-to-Face Undergraduate- and Graduate-Level Leadership Studies Courses
Major Writing Project/ Term Paper
Individual Leadership Development Plans
Observation/Interview of a Leader
Student Peer Assessment
Read and Respond
Portfolio or evidence collection
Note: Of the 836 participants who reported having taught a face-to-face, academic credit-bearing leadership studies course within the last two years, only n = 606 (Graduate: n = 263; Undergraduate: n = 343) progressed through the survey to the questions represented in Table 5.
Instructional Strategy Use in Face-to-Face Leadership Education. Overall, the instructors who taught GL face-to-face leadership studies courses most frequently used discussion-based pedagogies such as class discussion (M = 4.75, SD = 0.46), interactive lecture/discussion (M = 4.25, SD = 0.82), and small group discussion (M = 4.12, SD = 0.89), along with case studies (M = 3.67, SD = 0.99) and self-assessments & instruments (M = 3.46, SD = 1.22). Conversely, the same group used highly experiential, skills-based instructional strategies such as role play activities (M = 2.73, SD = 1.19), debates (M = 2.67, SD = 1.24), simulation (M = 2.36, SD = 1.24), and games (M = 2.35, SD = 1.15) far less frequently, with in-class short writing (M = 2.26, SD = 1.03) and service learning (M = 1.88, SD = 1.13) used least frequently. In comparison, instructors who taught UL face-to-face leadership studies courses also used the same group of discussion-based pedagogies quite frequently (class discussion, M = 4.72, SD = 0.50; interactive lecture/discussion, M = 4.15, SD = 0.90; and small group discussion, M = 3.91, SD = 0.96), but used self-assessments & instruments (M = 3.54, SD = 1.06) and media clips (M = 3.48, SD = 0.88) more often than case studies (M = 3.23, SD = 0.99). Similarly, the UL instructors avoided the same highly experiential instructional strategies, but used service learning (M = 2.69, SD = 1.48) more often, with debates (M = 2.47, SD = 1.04) and simulation (M = 2.41, SD = 1.12) used least frequently.
Comparing Instructional Strategy Use Between GL and UL Instructors. The two samples were identified as Group 1 and Group 2 for means comparison using an independent t-test. Group 1 (n = 272) represented the instructors who taught GL face-to-face leadership studies courses and Group 2 (n = 350) represented the UL instructors. The independent t-test was selected as the primary statistical analysis to compare the mean responses of the two groups. Since the sample sizes of the two groups differed, a pooled variance was computed. From the independent-groups t-test analysis comparing the 19 instructional strategies across the two instructor groups, the p-values for all t statistics were nonsignificant (p > .05) except for the following instructional strategies where, on average:
GL instructors used small group discussion (M = 4.12, SD = 0.89) more frequently than UL instructors (M = 3.91, SD = 0.96). This difference, 0.21, was significant t(620) = 2.76, p = .006, and represented a small effect, d = 0.22.
GL instructors used case studies (M = 3.67, SD = 0.99) more frequently than UL instructors (M = 3.23, SD = 0.99). This difference, 0.43, was significant, t(620) = 5.41, p < .001, and represented a medium effect, d = 0.43.
GL instructors used problem-based learning (M = 3.38, SD = 1.11) more frequently than UL instructors (M = 3.15, SD = 1.06). This difference, 0.23, was significant, t(620) = 2.60, p = .010, and represented a small effect, d = 0.21.
GL instructors used stories or storytelling (M = 2.94, SD = 1.27) more frequently than UL instructors (M = 2.64, SD = 1.18). This difference, 0.30, was significant, t(620) = 3.06, p = .002, and represented a small effect, d = 0.25.
GL instructors used debates (M = 2.67, SD = 1.24) more frequently than UL instructors (M = 2.47, SD = 1.04). This difference, 0.20, was significant, t(528) = 2.17, p = .034, and represented a small effect, d = 0.19.
GL instructors used games (M = 2.35, SD = 1.15) less frequently than UL instructors (M = 2.67, SD = 1.06). This difference, -0.33, was significant, t(558) = -3.66, p < .001, and represented a medium effect, d = -0.31.
GL instructors used in-class short writing (M = 2.26, SD = 1.03) less frequently than UL instructors (M = 2.65, SD = 1.16). This difference, -0.39, was significant, t(605) = -4.27, p < .001, and represented a medium effect, d = -0.35.
GL instructors used service learning (M = 1.88, SD = 1.13) far less frequently than UL instructors (M = 2.69, SD = 1.48). This difference, -0.81, was significant, t(620) = -7.48, p < .001, and represented a large effect, d = -0.60.
Assessment Strategy Use in Leadership Education. Overall, the instructors who taught GL face-to-face leadership studies courses attached the most weight in their overall course grades to a major writing project / term paper (M = 3.79, SD = 2.03), group projects/presentations (M = 3.26, SD = 1.75), research projects/presentations (M = 3.00, SD = 2.00), and class participation/attendance (M = 2.97, SD = 1.70). Conversely, GL instructors gave little or no weight to, and often excluded entirely from their courses, portfolio or evidence collection (M = 1.57, SD = 1.30), video creation (M = 1.36, SD = 0.97), and quizzes (M = 1.32, SD = 0.87). In comparison, instructors who taught UL face-to-face leadership studies courses also valued group projects/presentations (M = 3.37, SD = 1.63), but attached more weight to class participation/attendance (M = 2.92, SD = 1.46), exams (M = 2.86, SD = 1.88), major writing project / term paper (M = 2.83, SD = 1.76), and reflective journals (M = 2.47, SD = 1.68) than to research projects/presentations (M = 2.44, SD = 1.72). Similarly, UL instructors avoided video creation (M = 1.49, SD = 1.23) and quizzes (M = 1.66, SD = 1.08), with student peer assessment (M = 1.65, SD = 1.14) rounding out the bottom three.
Comparing Assessment Strategy Use Between GL and UL Instructors. As in the previous analysis, the two samples were identified as Group 1 and Group 2 for means comparison using an independent t-test. Group 1 (n = 263) represented the instructors who taught GL face-to-face leadership studies courses and Group 2 (n = 343) represented the UL instructors. As noted in Table 5, there was a slight difference in each group's sample size as participants progressed to the questions represented by the data here. Again, a pooled variance was computed. From the independent-groups t-test analysis comparing the 16 assessment strategies across the two instructor groups, the p-values for all t statistics were nonsignificant (p > .05) except for the following assessment strategies where, on average:
GL instructors attached more weight in their courses to major writing project / term paper (M = 3.79, SD = 2.03) than UL instructors (M = 2.83, SD = 1.76). This difference, 0.96, was significant, t(518) = 6.25, p < .001, and represented a large effect, d = 0.55.
GL instructors attached more weight in their courses to research projects/presentations (M = 3.00, SD = 2.00) than UL instructors (M = 2.44, SD = 1.72). This difference, 0.56, was significant, t(517) = 3.72, p < .001, and represented a medium effect, d = 0.33.
GL instructors attached more weight in their courses to individual leadership development plans (M = 2.51, SD = 1.74) than UL instructors (M = 2.18, SD = 1.43). This difference, 0.34, was significant t(500) = 2.60, p = .012, and represented a small effect, d = 0.23.
GL instructors attached less weight in their courses to quizzes (M = 1.32, SD = 0.87) than UL instructors (M = 1.66, SD = 1.08). This difference, -0.33, was significant, t(603) = -4.08, p < .001, and represented a medium effect, d = -0.33.
GL instructors attached less weight in their courses to reflective journals (M = 2.20, SD = 1.63) than UL instructors (M = 2.47, SD = 1.68). This difference, -0.27, was significant, t(604) = -1.97, p < .05, and represented a small effect, d = -0.16.
GL instructors attached less weight in their courses to exams (M = 2.17, SD = 1.84) than UL instructors (M = 2.86, SD = 1.88). This difference, -0.69, was significant, t(604) = -4.54, p < .001, and represented a medium effect, d = -0.37.
Until now, no research had investigated the instructional and assessment strategy use of GL and UL leadership educators on a global scale. The findings of this study suggest that discussion-based pedagogies such as class discussion, interactive lecture/discussion, and small group discussion are used most frequently. Opportunities to analyze real-world issues through case studies and to evaluate oneself through self-assessments & instruments were the next most frequently used. Compared with Jenkins's (2012, 2013) study of instructional strategy use by leadership educators in UL face-to-face classrooms, similarities abound. Discussion-based pedagogies were used so frequently that Jenkins (2012) deemed them the "signature pedagogy" (see Shulman, 2005) of undergraduate leadership education.
Additionally, Jenkins (2012) found that the use of self-assessments & instruments and case studies was also quite frequent. Arguably, discussion-based pedagogies are the signature pedagogy for graduate-level leadership studies as well. Perhaps the primary activity of shared dialogue, coupled with an inclusive environment, is a shared value among leadership educators regardless of the academic level at which they teach.
This study was also the first to report on the assessment strategy use of instructors who teach face-to-face leadership studies courses. However, Jenkins (2012, 2013) did include a few "Instructional Strategies," such as group projects/presentations and research projects/presentations, that were identified in the present study as "Assessment Strategies." Correspondingly, Jenkins (2012, 2013) found group projects/presentations and research projects/presentations to be the fourth and fifth most frequently used instructional strategies, respectively. In comparison, here, group projects/presentations was the most heavily weighted assessment strategy for UL instructors, while research projects/presentations was the sixth. These findings suggest the value leadership educators place on students' ability to present information and work in groups.
Furthermore, this was the first study to report on differences between the instructional and assessment practices of GL and UL leadership educators. According to the statistical analysis, GL instructors used small group discussion, case studies, problem-based learning, stories or storytelling, and debates significantly more frequently in their teaching than UL instructors. In comparison, GL instructors used games, in-class short writing, and service learning far less frequently. Perhaps these choices reflect instructors' judgments about their learners' developmental readiness (see Avolio & Hannah, 2008; Dugan, 2013) or the nature and depth of content covered in their courses. Likewise, GL instructors attached significantly more weight in their courses to major writing projects / term papers, research projects/presentations, and individual leadership development plans than UL instructors, and significantly less weight to quizzes, reflective journals, and exams. One might expect the rigor of intensive writing in graduate work; however, the lesser weight given to reflective journals runs counter to the literature promoting the association between leadership development and reflection (e.g., Densten & Gray, 2001). Or perhaps leadership educators are in fact using reflection in their teaching, but in ungraded activities and experiences.
Implications for Practice
This study was undertaken with the vision that it would prove pragmatic for leadership educators who teach face-to-face courses in GL and UL academic settings, as well as for academic and student affairs administrators seeking to better understand the types of teaching and learning present in the leadership classroom. This exploratory study of instructional and assessment strategy use has implications for practice for a variety of stakeholders who seek to advance teaching and learning in GL and UL leadership education globally, and the findings have implications for the discipline itself. For example, the findings could provide a foundation for the curriculum of leadership-focused workshops and professional conference sessions. Further, findings from this study may catalyze innovations in the way leadership is taught or promote focused research on the use and best practices of the most frequently used instructional and assessment strategies.
Instructional Strategy Use. While this study provided a first look at GL and UL instructional strategy use in leadership education, the findings do not differ greatly from Jenkins's (2012, 2013) studies. According to Andenoro et al. (2013), "Leadership Education is the pedagogical practice of facilitating leadership learning in an effort to build human capacity and is informed by leadership theory and research. It values and is inclusive of both curricular and co-curricular educational contexts" (p. 6). Hence, leadership educators facilitate discussion in the classroom to provide an inclusive environment and draw on the perspectives of students. Further, the use of case studies to provide real-world context and relevance, as well as self-assessments & instruments to allow for more personalized exploration, is abundant. And while the middle-of-the-road use of lecture is encouraging, the infrequent use of highly experiential activities such as role play activities, debates, simulation, and games is concerning. Moreover, what justifications might GL leadership educators provide for their lack of in-class short writing or the significant absence of a powerful pedagogy like service learning? Are GL leadership educators assuming that their students have already engaged in service learning or have other opportunities to do so outside their academic pursuits? If not, students who complete graduate leadership programs may have far fewer experiences working in and with the community. Perhaps the service learning and civic engagement programming often housed in student affairs and associated with undergraduate education needs to extend its resources to graduate education as well.
Assessment Strategy Use. Prior to this study, little was known beyond anecdotal evidence about the value leadership educators place on graded assignments in their courses. Even so, the findings from this study may represent requirements for assessment driven by situational factors (see Fink, 2013) such as institutional, departmental, or program-level mandates rather than by the instructors themselves. Thus, the value leadership educators place on particular assessment strategies may not be accurately captured. The findings suggest that the heavy emphasis on major writing projects / term papers often associated with GL work also rings true in the leadership discipline. However, the heavy weight attached to group projects/presentations is comparable between GL and UL instructors, as is the percentage of a student's grade attached to class participation/attendance. And while it is important to come to class and engage in discussion, how will leadership educators assess students' ability to lead? According to the findings of this study, more value is associated with one's ability to reflect and learn from others (e.g., reflective journals, self-evaluation, observation/interview of a leader, student peer assessment) than with one's skill demonstration. Arguably, while a significant association between leadership development and reflection is understood (i.e., Densten & Gray, 2001; Guthrie & Jones, 2012), an "assessment impasse" exists with respect to leadership performance, as posited by Bass (1985), Heifetz and Laurie (1997), Mumford, Friedrich, Caughron, and Antes (2009), Peck, Freeman, Six, and Dickinson (2009), and Allen and Roberts (2011). How will leadership educators respond to the pressures to assess students' leadership effectiveness outside the classroom? As Andenoro et al. (2013) suggest, "As innovative and learner-centered pedagogical approaches are being used in the field of Leadership Education, empirical research on such approaches is needed to gain more useful knowledge beyond utilization and instead on effective and engaging ways of teaching that meet educational objectives" (p. 9). Another aim of this study was to provide a foundation for this work.
The leadership discipline is young, and little is known about the classroom environments of leadership educators. Empirical data from this study provide new knowledge for the discipline. For example, we can now describe the leadership classroom to stakeholders and others outside the discipline. Fittingly, one might portray a leadership classroom engaged in discussion—be it with the instructor or in small groups—perhaps dialoguing about a case study, a recently completed self-assessment or instrument, or even reflective writing. Students may be deeply immersed as spectators of their peers' group presentations. Yet this is only an informed conjecture. As Andenoro et al. (2013, p. 7) suggest, "To fully understand the leader, follower, and learner, it is essential to gain holistic perspective of their feelings and perceptions," and researchers should "…engage in research methods that collect rich data on student and faculty experiences in Leadership Education… and the effectiveness of instructional and assessment strategies." Ideally, future qualitative research grounded in naturalistic inquiry, such as observing students and instructors participating in leadership education, interviewing them about their experiences teaching in or completing a leadership program, or facilitating focus groups about their experiences and preferences with instructional and assessment strategies, will build on this study.
The purpose of this study was to identify the instructional and assessment strategies used most frequently by leadership educators and to compare their use between GL and UL instructors. In the absence of any prior studies, these findings provide insight into the current state of leadership education globally. At the macro level, the researcher hopes that the findings will be pragmatic, aiding stakeholders in designing curriculum and evaluating leadership programs. Moreover, it is a chief aim of this research that future scholars design workshops and conference sessions, author books and articles, and provide professional development opportunities for leadership educators that are inclusive of the instructional and assessment strategies surveyed here. Additionally, the findings offer shared attributes across global borders and educational levels that may better describe the practice of leadership education. At the micro level, the findings may contribute to the design of leadership program policies, provide impetus for new research, and contribute to the existing body of literature.
References
Allen, S. J., & Hartman, N. S. (2008a). Leadership development: An exploration of sources of learning. SAM Advanced Management Journal, 73(1), 10–19, 62.
Allen, S. J., & Hartman, N. S. (2008b). Sources of learning: An exploratory study. Organization Development Journal, 26(2), 75–87.
Allen, S. J., & Hartman, N. S. (2009). Sources of learning in student leadership development programming. Journal of Leadership Studies, 3(3), 6-16.
Allen, S. J., & Roberts, D. C. (2011). Our response to the question: Next steps in clarifying the language of leadership learning. Journal of Leadership Studies, 5(2), 65-70.
Andenoro, A. C., Allen, S. J., Haber-Curran, P., Jenkins, D. M., Sowcik, M., Dugan, J. P., & Osteen, L. (2013). National leadership education research agenda 2013-2018: Providing strategic direction for the field of leadership education. Retrieved from Association of Leadership Educators website: http://leadershipeducators.org/ResearchAgenda
Andres, L. (2012). Designing and doing survey research. London: Sage.
Crawford, C. B., Brungardt, C. L., Scott, R. F., & Gould, L. V. (2002). Graduate programs in organizational leadership: A review of programs, faculty, costs, and delivery methods. Journal of Leadership & Organizational Studies, 8(4), 64-74. http://dx.doi.org/10.1177/107179190200800406
Jenkins, D. M., & Allen, S. J. (2012, December). What doesn't work in undergraduate leadership education. Workshop presented at the 2012 Leadership Educators Institute, The Ohio State University, Columbus, OH.
Moorhead, G., & Griffin, R. W. (2010). Organizational behavior: Managing people and organizations. Mason, OH: South-Western.
Mumford, M. D., Friedrich, T. L., Caughron, J. J., & Antes, A. L. (2009). Leadership development and assessment: Describing and rethinking the state of the art. In K. A. Ericsson (Ed.), The development of professional expertise: Toward measurement of expert performance and design of optimal learning environments (pp. 84-107). New York, NY: Cambridge University Press.
Pendse, R., & Johnson, E. (1996, November). Teaching an undergraduate class vs. graduate class: Is there a difference? In Proceedings of the 26th Annual Frontiers in Education Conference (FIE '96) (Vol. 1, pp. 59-62). IEEE.
Perry, W. (1997). Cognitive and ethical growth: The making of meaning. College student development and academic life, 4, 48-116.
Pew, S. (2007). Andragogy and pedagogy as foundational theory for student motivation in higher education. InSight: A Collection of Faculty Scholarship, 2, 14-25.
Plemmons, J. K. (2006). Application of pedagogy or andragogy: Understanding the differences between student and adult learners. In Proceedings of the 2006 Southeastern Section Meeting of the American Society for Engineering Education Conference. Tuscaloosa, AL.
Riggio, R. E., Ciulla, J., & Sorenson, G. (2003). Leadership education at the undergraduate level: A liberal arts approach to leadership development. In S. E. Murphy & R. E. Riggio (Eds.), The future of leadership development (pp. 223-236). Mahwah, NJ: Lawrence Erlbaum Associates.
Shulman, L. S. (2005). Signature pedagogies in the disciplines. Daedalus, 134(3), 52-59.
Schmidt-Wilk, J. (2010). Signature pedagogy: A framework for thinking about management education. Journal of Management Education, 34(4), 491-495.
Sogunro, O. A. (2004). Efficacy of role-playing pedagogy in training leaders: Some reflections. Journal of Management Development, 23(4), 355-371.
Stewart, A. C., Houghton, S. M., & Rogers, P. R. (2012). Instructional design, active learning, and student performance: Using a trading room to teach strategy. Journal of Management Education, 36(6), 753-776.
Wren, J. T. (1994). Teaching leadership: The art of the possible. Journal of Leadership & Organizational Studies, 1(2), 73-93.
Yoshimoto, K. (2002). Higher education and the transition to work in Japan compared with Europe. In J. Enders & O. Fulton (Eds.), Higher education in a globalising world (pp. 221-240). Dordrecht: Kluwer.
Yoshimoto, K., Inenaga, Y., & Yamada, H. (2007). Pedagogy and andragogy in higher education—A comparison between Germany, the UK and Japan. European Journal of Education, 42(1), 75-98.
Yukl, G. (2006). Leadership in organizations (6th ed.). Upper Saddle River, NJ: Pearson Education.
Zimmerman-Oster, K., & Burkhardt, J. C. (1999). Leadership in the making: Impact and insights from leadership development programs in U. S. colleges and universities. Battle Creek, MI: W. K. Kellogg Foundation.
Dan Jenkins, Ph.D., is Director and Associate Professor of Leadership & Organizational Studies at the University of Southern Maine. He received his doctorate in Curriculum & Instruction from the University of South Florida. Dan has published more than 30 articles and facilitated dozens of workshops around the world on leadership education, pedagogy, curriculum, and course design. Dan is a past chair of the ILA Leadership Education Member Interest Group, co-chair of the ILA Leadership Education Academy, and former Secretary of the Association of Leadership Educators. He can be reached at email@example.com.