Introduction
Program reviews conducted by experienced faculty and staff members are standard practice on college campuses. In recent years, the number of leadership programs in the United States has grown from “nearly 1,000” (Brungardt et al., 2006) to more than 1,500 (Guthrie et al., 2018). Yet, the organizational home for such programs varies both within and across institutions (Guthrie & Jenkins, 2018). As a result, leadership program review processes are not consistent or standard. Further, little research exists that specifically explores program reviews for leadership education. This project seeks to fill the gap in our understanding of leadership program reviews.
The purpose of this comparative case study is to better understand the process and outcomes of leadership program reviews in higher education through the lens of leadership education professionals who have served as program reviewers. The research questions guiding this study include: (a) What encompasses a leadership program review in higher education? and (b) What are some experience-based practices for facilitating leadership program reviews in higher education? In conducting this study, we hope to provide insight into the logistics and outcomes of the review process, reviewer experiences, and lessons learned. This information can help the field of leadership education establish more consistent program review practices and provide valuable information for faculty and staff members looking to invite others to review their program and/or who are asked to conduct a program review themselves.
Background
Program review has been a standard in higher education for centuries (Conrad & Wilson, 1985; Kuh et al., 2014). For many academic disciplines, this process is well documented and relatively consistent (Ewell et al., 2011). Yet, for leadership education programs and initiatives in higher education, which vary in institutional home, span both curricular and co-curricular contexts, and are informed by a variety of disciplinary knowledge, the existing program review literature and guidance fall short in helping educators understand the process and purpose of such reviews. This complexity is intensified because even where leadership programs are located in higher education varies considerably from institution to institution (Guthrie & Jenkins, 2018; Owen, 2012). While the vast majority of leadership programs are aligned with academic and student affairs departments, the disciplinary and organizational “home” varies considerably within institutions (Guthrie et al., 2018). Academic leadership programs are found across a wide variety of academic colleges, departments, and programs, including business or management, education, agriculture, and various iterations of leadership (e.g., leadership studies, organizational leadership) (Jenkins, 2016; Jenkins & Dugan, 2013). Moreover, academic leadership programs may be found under the auspices of a President’s or Provost’s Office or through partnerships with student affairs (Buschlen & Guthrie, 2014; Harvey & Jenkins, 2014). Similarly, co-curricular leadership programs may be found in dedicated student centers, within student activities or residence life, or as part of other institutional or programmatic initiatives related to the student experience (Jenkins & Owen, 2016; Rocco & Pelletier, 2019). Consequently, the inconsistency in institutional home for leadership programs creates myriad challenges for reviewers.
Like their faculty and staff counterparts, program administrators within curricular and co-curricular leadership programs regularly face the burden of assessment (Goertzen, 2009). Whether the request to review a leadership program arises from institutional leadership, the pursuit of resources or reorganization, or a cyclical review process, conducting program reviews has become commonplace (Perruci & McManus, 2013; Sowcik et al., 2013). And while leadership program stakeholders have a variety of resources at their disposal, such as the Council for the Advancement of Standards in Higher Education (CAS) Standards, the International Leadership Association (ILA) Guiding Questions: Guidelines for Leadership Education Programs (2009), The Handbook for Student Leadership Development (Komives et al., 2011), and recent scholarship specifically on the topic of leadership program assessment (e.g., Nobbe & Soria, 2016), no clear guidance exists with respect to the process or shared outcomes of leadership program reviews (Ritch, 2013). Instead, there are ongoing debates within and among professional associations regarding the creation of accreditation, certification, guidelines, standards, principles, or some other kind of formalized program review as a way to answer these questions and as a path toward more legitimate standing in the academy (ILA General Principles Task Force, 2021; Kellerman, 2018a, 2018b; Perruci & McManus, 2013; Ritch, 2013; Ritch & Roberts, 2005; Ritch et al., 2004). As a result, “questions of legitimacy and accountability persist,” and “more and more educators search for answers” (Ritch, 2013, p. 66). And while there are artifacts of evidence-based practices and recommendations for future resources (e.g., Guthrie & Jenkins, 2018; Jenkins et al., 2012), as well as major association initiatives currently underway to address these gaps, such as the ILA’s General Principles Task Force (GPTF) (ILA GPTF, 2021), more information is needed about the criteria and processes used to facilitate leadership program reviews.
Method
Understanding the nature of leadership program reviews requires insight from those who have been a part of these endeavors. As such, we chose to seek out individuals with program review experience within the field of leadership education as the source of data for our study. Each participant’s comprehensive experience facilitating program reviews was considered a case. We employed a comparative case study approach to search for themes and patterns within and across program reviewers’ experiences (Merriam, 1998). Findings are presented as an examination of multiple, individual cases compared to one another to further inform understanding of the phenomenon studied. Comparative case studies provide the opportunity for deeper and more complex interpretation than what can be gleaned from a single case example. Further, finding themes, patterns, and even contradictions in a range of cases strengthens the precision, validity, and stability of the interpretation (Merriam, 1998; Miles & Huberman, 1994). Accordingly, the comparative case study approach provides an appropriate guide for better understanding leadership program reviews through the lens of program reviewers.
Sampling and Participants. While program reviews are quite common in higher education, our literature review reveals that reviews specific to curricular and co-curricular leadership programs are rare. Often, leadership programs are reviewed as part of larger departmental or unit reviews, with reviewers who may or may not identify professionally as leadership educators or have experience working with leadership programs specifically (Sowcik et al., 2013). Consequently, we sought to identify professionals for this study who had experience serving as reviewers for leadership programs as a main or major focus. With such a niche population in mind, we employed a combination of purposive and snowball sampling. Purposive sampling involves researchers using their own professional judgment in participant selection (Creswell & Poth, 2018). As leadership educators with reviewer experience ourselves, we first sought out colleagues who we knew had served as leadership program reviewers. A snowball sample requires that researchers identify cases of interest from people who know of information-rich cases (Creswell & Poth, 2018). For this study, we turned to leadership educators in our professional networks and professional associations for recommendations of those who would match our participant needs. Sampling yielded 13 diverse participants ranging in social identity as well as professional roles and years of experience in leadership and higher education in the United States. Participants’ professional roles included clinical and tenured faculty, leadership center directors, and academic and student affairs administrators (see Table 1). Participants had each facilitated no fewer than two program reviews. Additionally, participants are members of, and in many cases serve in leadership roles for, multiple professional associations and entities that serve leadership educators in higher education, including the ILA, the Association of Leadership Educators, the National Clearinghouse for Leadership Programs (NCLP), NASPA Student Affairs Administrators in Higher Education, and ACPA College Student Educators International, among others. Participants have all contributed to leadership education scholarship and been featured scholars and facilitators at leadership educator professional development experiences.
Table 1
Participants
Code | Current Position
B1 | Clinical Faculty: Educational Studies
G1 | Tenured Faculty: Leadership Studies
E1 | Program Director: Diversity, Inclusion, & Leadership
S1 | Clinical Faculty and Program Director: Leadership Studies
R1 | Tenured Faculty: Leadership and Organizational Psychology
P1 | Tenured Faculty and Administrator: Leadership
S2 | Program Director: Leadership
O1 | Senior Administrator: Campus Life
G2 | Tenured Faculty: Higher Education
O2 | Tenured Faculty: Leadership Studies
R2 | Consultant
K1 | Tenured Faculty: Student Affairs
M1 | Academic Administrator: Leadership and Service
Note: R2 previously served as a senior academic and student affairs administrator.
Data Collection & Analysis. Participants were interviewed between January and March 2020 using Zoom web conferencing. A semi-structured, narrative interview format was used, with questions/prompts provided to participants in advance (see Appendix). Interviews took the form of facilitated dialogue, focusing on participant narratives and reflections from their review experiences over time (Glesne, 2011). Each interview was scheduled for 90 minutes, and the average interview lasted 75 minutes. Both researchers were present for and participated in all interviews. Interviews were recorded with participant knowledge and consent. No deception was involved in this study, participant identities were kept confidential, and data were securely stored on a password-protected computer.
Analytic memos were taken throughout each participant interview. Each interview was also transcribed verbatim using transcription software. As the focus of this study was to uncover and explore salient experiences, lessons, and reflections from each participant, a narrative analysis of interview transcripts and memos was conducted to uncover themes within and across participants’ stories (Glesne, 2011). Initial transcript and memo review led to a set of initial codes that addressed general theme categories, including (a) review focus, reason, and purpose; (b) logistical processes and components; (c) reviewing resources used; (d) review findings and reporting; (e) contextual influences; and (f) reviewer lessons. NVivo qualitative analysis software was used (e.g., word cloud, text analysis, and word frequency tools) to confirm initial codes and assist in determining further subcodes for a deeper examination of insights shared by participants within each theme area. Axial codes were also created to further organize insights (Saldaña, 2013). Patterns and themes were revisited and refined throughout the analytic process, which included a second round of transcript reviews and consultation of the original audio files.
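To make the word-frequency step concrete, the short sketch below shows one way such a check can be run outside of NVivo. This is a minimal illustration in Python under assumed inputs: the transcript file path and stop-word list are hypothetical placeholders, and the sketch approximates the general technique rather than reproducing NVivo's actual procedure.

```python
import re
from collections import Counter

def word_frequencies(text, stop_words=frozenset({"the", "and", "that", "you", "was", "for", "with"}), top_n=20):
    """Return the top_n most frequent substantive words in a transcript."""
    # Lowercase the text and keep alphabetic tokens (apostrophes allowed).
    tokens = re.findall(r"[a-z']+", text.lower())
    # Drop very short tokens and common stop words before counting.
    counts = Counter(t for t in tokens if len(t) > 2 and t not in stop_words)
    return counts.most_common(top_n)

# Hypothetical usage: inspect frequent terms in one participant's transcript.
# with open("transcripts/B1.txt") as f:
#     print(word_frequencies(f.read(), top_n=10))
```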
It is important to acknowledge our positionality as researchers in the discussion of the analytic process, as this provides insight into our interpretation of the data and interest in the findings (Merriam, 1998). Both of us identify as leadership educators in higher education. We have also engaged in leadership program reviews as participants and reviewers. Our interest in this topic stems from professional experience in addition to general discourse in leadership education professional circles regarding the variance in leadership program review experiences. Noting this experience and perspective, we utilized a consistent interview protocol across interviews and were careful to transcribe and code interviews prior to analysis. A standard coding scheme was also maintained throughout the analysis. Each of these measures helps to limit bias and ensure a systematic, consistent analytic process (Merriam, 1998; Miles & Huberman, 1994; Saldaña, 2013; Yin, 1994). Moreover, these measures help to improve the validity and reliability of the methods employed here.
Findings
Findings for this study are categorized into four major themes found across participants’ comprehensive program reviewing experiences: (a) review logistics, including the stated focus and reason for a program review as well as the outlined process of conducting the review; (b) reviewer experience, including the various roles played by a reviewer, resources used to complete the review, and contextual influences on the reviewer’s approach; (c) review outcomes, including both findings from the review process and resulting recommendations; and (d) lessons learned, including necessary reviewer skills, logistical advice for conducting reviews, power considerations, and political dynamics in a review process.
Review Logistics. Participants reflected on the logistical aspects of their various reviewing experiences including the components and design of program reviews, the expressed purpose and focus, and both planned and unplanned aspects of the review process.
Review Purpose and Focus. Study participants noted that institutions sought leadership program reviews for a variety of reasons, both stated and unstated. We discuss the more covert reasons cited by participants later in this study. When formally introduced to a review opportunity by an institutional representative, however, reviewers were given a range of reasons, from external accreditation requirements tied to an affiliated academic unit to internal requests by faculty and staff embarking on curriculum and/or organizational redesign initiatives.
In some cases, regional accreditation, institutional review cycles, or other general institutional requirements motivated a program review. G1, for example, tended to review academic leadership programs and noted how accreditation was routine for some, stating, “they had to do it every five years, I think their school required them all to.” For others, budgetary adjustments and organizational realignment prompted the program review. As G1 notes about one review experience:
They were facing some budgetary issues and their program is very big, but they didn’t have any dedicated full-time faculty, and… they were sort of hanging out there. And so, there were departments trying to claim them, I think. And so, it may have been sort of a hostile takeover actually, and I think the provost required them to do accrediting.
Outside of formal or routine accreditation, leadership program reviews can have varied purposes based on the needs and wants of those requesting the review. S1 notes that reviews they conducted “weren’t necessarily formal.” O1 shares examples of more informal review requests, indicating that an internally driven review can come at the request of faculty and staff who “just want to learn” or believe a review will assist an office “trying to get some more leverage for resources or opportunities.” Other participants shared similar insights, noting that those requesting the review were seeking advice for program redesign, new initiatives, or identifying areas for potential partnerships. As R1 reflects: “either I’m going there because they’re in the creation stage or I’m going there because they want some insights into what they already have built and how they can go further.” Other participants noted that review requests were motivated by the need for strategic intervention, such as identifying and resolving particular issues or challenges associated with reorganization.
In a further demonstration of the varied nature of leadership program reviews, participants reflected on the types of leadership programs they had been asked to review. These included academic majors and minors, certificate programs, co-curricular units focused solely on leadership programs, co-curricular units combining leadership with other types of programming, and institutional leadership centers. For example, S1 shares a wide range from their experience:
I’ve certainly reviewed a number where their primary focus is leadership education in both student development and academic realms. I have also been a part of a couple that have a broader student activities framework. You know, not just leadership, community service, or leadership and civic engagement. Or, just a leadership office within career centers. … One was directly connected to a student union and broader programmatic agenda… others in the group might be looking at it from a facilities perspective.
As S1’s statement alludes to, leadership programs are rarely housed in a single academic or administrative function at an institution; rather, they exist across departments and units serving varied purposes that align with missions of their institutional homes (Rocco & Pelletier, 2019). Organizational homes and structures for leadership programs also vary across institutions. Participants’ statements, such as R1’s below, reflected this variety, noting that organizational home and program mission informed review purpose and goals:
The other thing, too, is where’s leadership housed? … I used the ‘1000 points of light’ kind of metaphor… I said, you know, it’s really extraordinary here that there’s leadership going on in athletics, and in you know, communication and in business, and, you know, but it’s- there’s nothing connecting them together. And so, what we need is the network to tie the lights together.
R1’s example, shared through a common metaphor, calls out this common decentralization, reflecting on how the variety of organizational homes for leadership programs at one institution can have real implications for reviewers.
Review Process. Participants also shared experiences with the review process and format, specifically regarding data gathering and engagement with program personnel. First, most participants described a timeline of one to three days visiting on campus, which typically required travel and, in some cases, overnight stays to conduct review activities over multiple days. Campus time is spent in meetings and conversations with stakeholders, the primary component of the review process. Stakeholders could include faculty, administration, staff, students, donors, community members, and/or alumni.
Meetings with stakeholders included standard listening sessions to understand diverse perspectives on program components and personnel, including strengths, weaknesses, and potential opportunities. However, participants noted that their time with stakeholders could also take the form of more collaborative design or brainstorming sessions in which they actively facilitated a mutual planning process alongside stakeholders. As K1 notes from many review experiences, “[they] have brought me in to speak and then meet with staff and meet with key people.” In this way, reviewers are welcomed, and in some cases expected, to share their professional experience and knowledge regarding program design.
For example, some participants in this study were asked to share best practices from other program reviews they had conducted, benchmark programs they were aware of, or places they had worked. M1 reflects on a program review experience that included a workshop on best practices beyond the traditional review process:
I ended up teaching a workshop…. they did a review of other programs and kept coming back to our program saying, we want to do something like this within our school of business. … I did spend time with the committee… with some students and some staff members.
This reviewer’s experience echoes the variance in expectations for those conducting leadership program reviews, going beyond what might be considered standard in higher education review processes generally.
Reviewer Experience. This section discusses the various approaches participants take in conducting a review and the varied roles, expectations, and responsibilities they assume during the process. Reflections here draw on reviewers’ observations during various reviewing experiences and on decisions reviewers made using their knowledge and expertise, not necessarily what was planned or outlined in the review design. Participants reflected on variation and nuance in the type of work performed, the evaluative resources utilized, and the contextual influences that affected their work in progress.
Reviewer Roles. While the role of program reviewer may seem standard on the surface, participants in this study discussed how the reviewer role could adjust and adapt based on review needs and/or institutional expectations. As mentioned in the findings regarding the review process, participants often found themselves using professional knowledge, experience, and skill within the context of a review to assist with activities beyond data gathering and analysis. In addition to the reviewer role, participants discussed serving in capacities such as consultant, facilitator, trainer, expert, fundraiser, keynote speaker, therapist, and messenger. At times, expectations for reviewers to serve in other roles were explicit in the initial request for services. O2 recalls a time when multiple roles were negotiated upfront:
Well, it’s hard. I think there’s a value story here… I will have schools come to me and say, ‘well, we want you to teach Strengths,’ or ‘we want you to come teach the Five Practices.’ So, I’m always like, ‘I will, but only if you let me do this as well,’ or ‘I want to be part of this bigger conversation.’ Like, I won’t just ‘dance monkey dance.’
O2’s experience underscores the agency reviewers need in determining the work they will do and the roles they will play as part of a review request when given the opportunity to discuss the terms. In some cases, however, institutions may ask for a program review when, whether it is clear to them or not, they are also looking for a consultant or advisor. R1 recalls an example in which the role of keynote speaker was unexpectedly added to their reviewer responsibilities:
It was like, the whole board, and everything was a big dinner, and the President, you know, kind of introduced me, and I didn’t even know I was going to speak until he introduced me. And, I’m like writing on the napkin when I realized. Fortunately, I had the one copy of the competency model, so I held it up and explained it, but that wasn’t as effective.
While this additional responsibility was placed upon R1 by the institution, role changes can also come from reviewers themselves. Participants in this study noted that they at times adapted their reviewer role mid-review based on what they were learning or experiencing in their meetings. Whether planned or spontaneous, externally or internally driven, the expandable and dynamic nature of the reviewer role was clear.
Reviewer Resources. While participants in this study have the academic and professional experience relevant to conducting leadership program reviews, they also named additional resources they used in the review process. Most commonly mentioned were association standards such as the CAS Standards (CAS, 2012) and the ILA’s Guiding Questions: Guidelines for Leadership Education Programs (ILA, 2009). Association documents such as these can be used before, during, and after reviews to help frame and focus the review process and findings. For example, P1 shares that they have “used the Guiding Questions as a resource beforehand… as a framework for the [institution] to build their self-study.” S2 reflects on using the CAS Standards throughout a review process:
Take those standards in the self-assessment guide and have a lot of different people do [the self-assessment]. And then, really look at that, and then for them to provide evidence. They weren’t really able to provide any evidence, and so when we went down there and interviewed people, we kind of asked some more pointed questions around the areas [in the standards.]
Some reviewers also turned to program models from their own experience, or those discussed in the literature, as benchmarks for particular leadership program types. An institution’s benchmark schools were also looked to for guidance, particularly around chosen leadership frameworks or program scope. O2 discusses the experience of working with a Catholic institution on choosing benchmark schools and program type:
I think, letting schools pick their comparison institutions… When the Catholic schools got together to share data they started realizing again, like with a social justice tradition and Catholic universities, the Social Change Model was a natural fit. So, it was a values-based kind of choice.
In addition to institutional type, O2’s mention of the Social Change Model (SCM) of Leadership Development (HERI, 1996) alludes to another common participant resource for conducting program reviews: leadership education literature, research, and models. Resources such as The Handbook for Student Leadership Development (Komives et al., 2011), The Role of Leadership Educators: Transforming Learning (Guthrie & Jenkins, 2018), the New Directions for Student Leadership series (Komives & Guthrie, 2015-2021), and the Multi-Institutional Study of Leadership were all named by multiple participants in this study.
Participants also indicated that program reviews incorporate data gathered directly from the institution. This could include program documents regarding design and curricula, reviewer research on institutional mission, or even, as mentioned above, a personnel self-study with questions provided by the reviewer in advance of a campus visit. As O1 states, “sometimes you’ll get kind of elaborate campus-based plans that you receive beforehand or assessments, pre-assessments,” indicating that those responsible for the leadership program may have already completed an internal review that is shared with external reviewers in advance.
Contextual Influences. A reviewer’s approach to conducting a leadership program review may also be informed by a number of contextual factors, including the program’s theoretical and conceptual grounding, institutional orientation, political and financial considerations, and/or administrative influences. These factors influence the way that leadership is understood and approached by the institution through its leadership program(s). As G2 notes: “contextual factors influence everything, and that’s where we need to start. And I think that’s often the misstep… not taking those contextual factors.” Participants in this study all shared reflections on the careful thought required to determine the appropriate guidelines, standards, literature, examples, and experiences they use to help ensure that each review they conduct is appropriate for and relevant to the specific program. Additionally, all participants in this study noted that the definitions, theories, and models of leadership used to construct and inform the implementation of a leadership program are key in framing their review process. A variety of factors can influence a program’s approach to leadership. For example, a program’s orientation toward leadership learning, skill-building, or practice will guide reviewers to the resources they use, questions they ask, and recommendations they offer in the review process. Whether a program focuses on individual leader development or leadership as a process in groups or communities is also a consideration. For example, leadership language within a program impacts the review process and the reviewer’s approach.
Participants also cited a program’s disciplinary orientation as an important context for the review. A leadership program could be rooted in a particular discipline, incorporate leadership approaches from a variety of disciplines (interdisciplinary), or use an approach applicable across disciplines (transdisciplinary). Participants note that disciplinary orientation likely informs theoretical and conceptual grounding around leadership. For example, some participants discussed that a leadership program could be grounded in human development and learning processes, while another may focus on leadership outputs and performance. R1 shares a reflection on the variance in disciplinary approaches to leadership programs:
We’re looking at different disciplines that were offering [leadership] programs and the differences that emerged … this gap between like, ‘what we think we’re doing’ and ‘what we’re actually doing’ … but we looked at like [a healthcare organization] and the military, but we also looked at agriculture programs, business programs, liberal arts programs… that’s part of the difficulty–they’re housed in different places and those places have their own cultures and practices.
Study participants noted that institutional factors such as mission, values, and budget influence a leadership program’s approach and purpose, which in turn shaped their approach to the review. G2 illustrates the impact of these factors in her reflection:
I will even ask, ‘how does your institution view or define leadership?’ And they’ll be like ‘what do you mean?’ And so, I’ll pull up their institutional mission statement and it talks about leadership … And I said, ‘Do you realize that?’ And they’re like, ‘no, we didn’t.’ Right? Like, not at all. And I said ‘so, my question for you is, ‘how does that influence, or does it not influence?” because then that will change how I approach this. Especially with budgeting and finance because there are some institutions that will ask me to look at their budget and say, ‘Do you think we’re spending enough money on this, or do you think there’s more?’ And then there’s other institutions that are like we can’t give you that information. But I mean, it does. I mean, I think it’s everything! That’s where we should all be starting, not only with evaluation, but implementation.
The relevance and applicability of a reviewer’s work are dependent upon their access to and understanding of a variety of contextual factors that influence the purpose, design, and implementation of a leadership program.
Review Outcomes. This section includes participants’ reflections on deliverables and feedback associated with review experiences. Participants shared what they were asked to produce as part of the review process and how they determined effective ways to communicate recommendations.
Findings and Recommendations. Participants shared that the recommendations they offered to host institutions primarily spoke to programmatic needs such as resources, staffing, or organizational structure. Recommendations could also include references to leadership models and approaches, programs or initiatives at benchmark institutions, and advice on creating or revising mission statements. For example, P1 recalled sharing recommendations related to curriculum, program home, and personnel:
I make lists in terms of resource use and allocation. Second… any kind of curricular or course deficiencies or the strength of a particular curriculum. … Another recommendation that’s common is to put the program in the context of the greater Leadership Studies community [on campus]… There are also titles that have come up. And so, if a person is considered a coordinator, what’s the difference between moving that person from coordinator to director or executive director? Does the executive or director have to be a faculty member? Does it have to be someone with a doctorate or can it be somebody with a masters from coming from the staff side? … where [the leadership program] should be housed… that’s a common question that I get.
R1 focused their recommendations on organizational structure as well as the mission and vision of the program:
A lot of what I talk about is the same kinds of stuff I would talk to a startup organization about, like, put time into your mission. …That mission should guide you, right? …You should have a vision statement, a mission statement … then I do talk about evaluation… I seek input and feedback for continuous improvement… evaluating everything you do.
In addition to reviewing staff and curriculum, some participants were asked to evaluate specific models used, how models are incorporated, or specific program deliverables. O2 shares an example of a specific model here:
I always try to get them to audit what they already do and map it onto their shared outcomes for each program, say, the service leadership intervention that you do. You have the Social Change Model, (but) which [model] best fits with what that (program) does? And then we look at it, and guess what? Nobody’s teaching these seven other C’s [of the model], you know? Nobody’s talking about Controversy with Civility and you wonder why you have all these problems.
Overall, participants in this study found it important to address not only current-state programmatic effectiveness in the review process, but to also offer suggestions for design revisions and program evolution.
The Report. Participants shared that in most cases they were asked to produce a written report and/or a final presentation of their major findings. These ranged “from that informal dimension all the way to written reports that evaluators filed with an accreditation review process” (K1). K1 also shared advice around the content of the report and the benefits of reviewing recommendations with an institution’s review coordinator to navigate complicated political dynamics prior to wider sharing:
I would be careful in a written report… You would then verbally process or debrief with a good person that you’re doing this for… I might write in the report that I heard numerous disparate perceptions of what the office was doing. And people see it very differently with different motives and different goals and think it’s very important that those who are central to this office’s success be on board with the direction they want the office to go in… And then maybe debrief around that same [idea] when I talk to the Associate Dean this office reports to …but I’m not in a report saying … ‘if this college doesn’t want this program in it, because we heard a number of criticisms about the program being located where it is, then there are other colleges that would be happy to have these students and that that should be pursued.’
Additionally, participants shared that they often combined reporting on the strengths of the program they were reviewing with pointing out opportunities or areas for improvement. Reviewers such as G1 shared how they tried to stay positive and helpful in their reviewer role:
I write more about, there’s opportunities here–some areas for improvement. And I try to keep it kind of positive. I don’t think it does any good to say, ‘that was really a problem’ … so many people are putting their lives into these programs and to say things that would be hurtful …I just don’t think it’s helpful.
Ultimately, participants felt that providing positive reinforcement along with more critical feedback aided in an institution’s reception of their report findings.
Lessons Learned. From their experience facilitating multiple reviews, participants in this study shared advice and insights gained over time that helped them continue to develop their capacity and effectiveness as program reviewers. The lessons shared pick up on themes from across the previously articulated insights and experiences related to personal preparation, situational awareness, and necessary skill sets.
Active Listening. Multiple participants in this study stressed the importance of active and deep listening skills for reviewers. This advice stemmed from the many fact-finding activities reviewers engage in and the often large number of stakeholders involved:
E1: Listening, I think, is probably the number one thing that I think we’re offering, which for me shifts it from sometimes a review, [which] can have that evaluative component… that, ‘I’m here to judge.’ … I think it’s less of that for me than to really just hold and listen and honor and then try to support them to whatever their next step is.
For many, listening was about more than just processing a volume of information; it also took on a counseling component:
K1: I have never used my counseling skills more …. you do students and mentoring and all that [in a review]. There’s a whole lot of really deep listening, active listening, you have to get a basis of conflict or misunderstanding of perception.
As captured here, reviewers utilized a variety of listening skills as part of their repertoire, stressing the importance of staying active and engaged in listening activities across all phases of the review process.
Team Reviewing. Several participants shared experiences related to being part of a review team and the benefits compared to facilitating a solo review. Relationships were a key factor, and those who had the authority to choose team members reported better experiences. Markers of team effectiveness included trust to discuss complex political and personnel matters, shared decision-making, and equal distribution of workload. E1 shared the advantages of working with a strong team, particularly related to interpreting culture and context:
I would hesitate for somebody to go out and do a formal review by themselves. But some of that was just, you know, our ability to connect with these different constituents and then for us as a review team to come back and debrief that and navigate the political landscape that we’re maybe not part of but now are implicated in, depending on how we present these opinions and these thoughts. And so, we were clear that our role was not to make judgments or decisions on their behalf, but really to kind of synthesize that information.
While team reviews have advantages and challenges, participants shared honest feedback on how they navigated each arrangement.
Organization is Critical. Study participants offered advice related to organization before, during, and after the review. These recommendations included identifying a contact person for campus logistics, setting expectations around stakeholder meetings, and staying on top of scheduling and meeting plans. Participants O1 and M1 shared examples:
O1: I think clarity really matters, like what are we judging ourselves by, and that then also helps with who to meet with … there is a sweet spot between too few meetings and then like three-and-a-half days of back-to-back, which is just not helpful. … clarity of structure and then creating schedules that are aligned with that clarity is significantly important to get data that is helpful.
M1: Give yourself enough time… I just ran out of time. You know when you’re going to write a consultant’s report and that takes a lot of time, and, with [University], we went back and forth over email trying to draft a report… if you end up being on campus, you need to make sure there’s going to be students there. If you do it during spring break, and there’s nobody there to talk to except the staff members, you’ve missed the whole piece of trying to assess what’s happening there on that campus.
These are just a few of the many examples participants shared related to the crucial need for organization throughout the review process.
Facilitation Skills. Facilitating discussion and dialogue in various formats was a common theme across participants’ reviewing experiences. Participants were often asked by institutions to conduct interviews, facilitate large and small group discussions, or run focus groups. Skilled facilitation enabled participants to engage in more sophisticated sense-making around relationship dynamics. As O2 and P1 explain:
O2: I’ll spend a day just interviewing people and then I’ll meet with the VP and people at the end, sort of, say, hear the story that’s being told. And here’s where students are telling a different story than faculty or where your students… your goals or your programs are not being realized in this way. So, I’ll have that kind of narrative prospect or inquiry process.
P1: There’s infighting between different factions within the program and one faction wants to go in one direction and another faction in a very different direction. And then they bring you in to try to mediate so you’re, you’re not really doing a report, you’re doing more of facilitating–facilitation and mediation exercise.
Beyond running meetings or asking interview questions, participants called upon their keen observation, questioning, and mediation skills throughout their reviewing experiences.
Power Considerations. Several participants reflected on realizing the power their recommendations could have over programs and people. For example, some were asked by an institution to make recommendations about financial resources and personnel. Others found themselves making recommendations that would significantly alter the future of a program. At times these weighty expectations were made clear by the institution in the review design, though sometimes they emerged within the review process. S1 and R1 reflected on being conscious of their influence and the implications of recommendations:
S1: We have to be careful with our voice and our participation in these types of [reviewing] experiences because it comes with great responsibility, burden, and the whole idea of leaving your weight behind.
R1: You’ve got to be open-minded, but I think at the same time, you better have some idea of internal standards… If you’re going to say what’s right with a program, you’ve got to be able to justify what you consider to be wrong. And in order to justify being wrong with the program, you’ve got to offer guidelines for improvement.
As noted, reviewers had to remain highly mindful, balancing their reviewer roles with a heightened awareness of the lasting effects of what they shared.
Political Dynamics. In addition to awareness of their own power, study participants also reflected on awareness of power dynamics within the institution. Many times a single insight in a review would uncover deeply complex political dynamics. Participants stressed the importance of sharp political acumen, staying humble and neutral, and listening to all stakeholders. The quotes from P1 and M1 below offer a few examples:
P1: The decision had been made to terminate the program and you were brought in and didn’t know that before you arrived and asked to explain why it needs to be terminated and that puts you in a very awkward position.
M1: Different stakeholders who may want different things. And that can be delicate… you got to ask some questions:… ‘who’s involved, who wants this, why do they want?’ You know, it’s not just one person usually. You gotta wade through it and dig deeper to find out what’s really behind it if you can. And maybe you’re never going to know.
With political dynamics often being more complex than a single review can uncover and address, participants stressed that reviewers should keep ethics front of mind when commenting on sensitive situations and avoid getting personally involved in institutional politics. P1 and K1 elaborate:
P1: I’ve had visits, where trustees were frustrated with the administration, that the administration was not moving fast enough with this program. And so, you have an ally there. But you have to be very careful politically. So, the context there is navigating the politics of the place. And without burning bridges and recognizing that you drop in, you stir the pot, and then you leave. So, you have to be ethical in the sense that you’re doing what you think is best for the institution.
K1: I don’t think you ever take just one person’s perspective on a political situation and assume it to be valid, or the only perspective … they brought me in because they’re a friend, and they got a mess going, you know, I just still can’t assume that that’s the only view on how that happened… so you’ve got to talk tenderly to other people.
Complex institutional political dynamics require reviewers to be open-minded, aware of biases and relationships, and to engage thoughtfully.
Discussion: Experience-Based Practices for Facilitating Leadership Program Reviews
In synthesizing themes from participants’ reviewing experiences, we offer five key principles that influence leadership program review processes. These include contextual and organizational factors as well as required skill sets.
Context Matters. Institutional stakeholders initiate program reviews for a variety of reasons, and reviewers should take the time to find out why. For example, institutions may be going through cyclical review processes as part of accreditation, soliciting expert advice, engaging in strategic intervention, or weighing the resources used and value provided by the program; all of these factors have implications for reviewers. In addition to the nature of the review, reviewers should be aware of key contextual factors present at the institution. For example, are there non-negotiable approaches or perspectives related to how leadership is defined (e.g., the SCM of Leadership Development), reliance on religious or other values, institutional or program mission statements, classifications (e.g., Carnegie, AAU), or donor influences that may render particular recommendations moot? Additionally, many participants shared examples of institutional murkiness around leadership programming due to the multiple areas within an institution where leadership programs were found. The resulting ownership claims, and often hierarchies, among stakeholders of different leadership programs within the same institution were a common challenge for reviewers to navigate.
Facilitation is Key. Reviewers should expect to meet and engage in dialogue with multiple institutional stakeholders, including faculty, administration, staff, students, donors, community leaders, and alumni, among others. In doing so, preparation, organization, and regular communication with decision-makers are crucial to a successful review. Additionally, through these fact-finding meetings, reviewers should expect to engage in active listening, facilitate dialogue and small-group discussions, run focus groups or interview key stakeholders, and keep their political ears and eyes alert.
Expectations. Reviewers should inquire at the onset of the process about the time required, the size of the review team, and the scope/format of recommendations (e.g., report, presentation). Reviewers would also be wise to ask whether there are benchmark programs, internal reports or self-studies, or other institutional documents or criteria against which the review or recommendations may be weighed. Additionally, reviewers should inquire whether any duties beyond the role of a typical reviewer are expected. For example, participants in our study were asked to present on leadership program best practices, their own programs, and specific leadership models such as the SCM of Leadership Development (HERI, 1996) and The Five Practices of Exemplary Leadership (Kouzes & Posner, 2017), while others were asked to lend a hand in program and curricular design and to meet with donors.
Resources and Benchmarks. Reviewers should have access to resources such as association standards (e.g., CAS Standards, ILA Guiding Questions) as well as identify benchmark institutions and leadership programs. Attention should also be given to unique institutional and programmatic factors when determining appropriate benchmarks for a review. As reiterated by participants in this study, the variance in leadership program purpose, structure, and organizational home cannot be ignored in the review process.
Reviewer Beware. In addition to the multiple hats reviewers wear, many participants shared examples where they were asked to play a mediating role between or among institutional programs, administrators, and staff. Moreover, reviewers were asked to share recommendations related to program resources and budgets, including staffing and personnel issues in particular. Reviewers also shared experiences where what they were being asked to accomplish was outside of the scope of what may be considered standard in a higher education program review. As a result, participants in this study commented on the seriousness of the role of the reviewers as well as the implications of recommendations they might offer.
Implications for Research and Practice
Our findings outline the various contexts in which individuals who facilitate leadership program reviews may find themselves working, the impetus for performing such work, the resources available and employed, and sound advice for practitioners. This study has limitations, including the individuals selected to participate, the institution and program types reviewed, and its focus on the experiences of reviewers rather than the documents produced or gathered during the reviews. Yet, there are several implications for leadership educators and program architects elicited from the themes found in this study:
- There is a clear need for professional development related to the practice of conducting leadership program reviews. This could come in the form of training or workshops offered through professional associations at conferences, through webinars or other virtual events, and through the publication of manuals that outline the process in depth. Moreover, these resources should provide both general guidance and guidance specific to curricular versus co-curricular programs.
- The vast majority of participants in our study shared that they were explicitly asked about benchmark institutions for comparative data or examples of program structure, outcomes, and programs. Arguably, more concrete and robust resources beyond conference sessions that showcase leadership programs such as “Chapter 7: Distinctive Contextual Leadership Program Examples” (Guthrie & Jenkins, 2018) and leadership program examples offered in The Handbook for Student Leadership Development (Komives et al., 2011) are needed to fill this gap.
- Examples of and artifacts from completed program reviews are desperately needed. While the individuals interviewed in this study were chosen, both by the researchers and by the institutions where they performed their program reviews, because of their experience and expertise, keeping the intricacies of these processes secret does more to hinder than to advance the field.
- Jenkins et al. (2012) and others have demonstrated that leadership educators and program architects are seeking out criteria from which to evaluate leadership programs. While this ongoing debate is documented in our review of the literature, it is important to draw attention to this gap. Leadership program reviewers use a variety of resources to conduct their reviews and future research is needed to evaluate the quality of these resources and effectiveness of using them for both conducting program reviews and creating recommendations for the institutions under review.
Future research and human capital are needed to identify criteria, or at the very least, guiding principles, for evaluating leadership programs. When compared to other professions such as social work, medicine, or law, where clear standards, criteria for licensure, or accreditation guidelines are established, leadership programs are at a disadvantage (Kellerman, 2018a). At the same time, the diversity in leadership program purpose, design, and organizational home both within and across institutions requires review resources and guidance that address this variety. Even so, if such criteria are created and agreed upon by leadership educators, scholars, and practitioners, such progress may still not overcome the institutional and political dynamics present in the review processes experienced by participants in this study. In any event, we hope this study sparks new thinking about conducting leadership program reviews in higher education and provides resources and perspectives that future reviewers can utilize to improve their practice.
References
Brungardt, C., Greenleaf, J., Brungardt, C., & Arensdorf, J. (2006). Majoring in leadership: A review of undergraduate leadership degree programs. Journal of Leadership Education, 5(1), 4-25. https://doi.org/10.12806/v5/i1/rf1
Buschlen, E., & Guthrie, K. L. (2014). Seamless leadership learning in curricular and cocurricular facets of university life: A pragmatic approach to praxis. Journal of Leadership Studies, 7(4), 58–63. https://doi.org/10.1002/jls.21311
Conrad, C. F., & Wilson, R. F. (1985). Academic program reviews: Institutional approaches, expectations, and controversies (ASHE-ERIC Higher Education Report No. 5). Association for the Study of Higher Education.
Council for the Advancement of Standards in Higher Education. (2012). Student leadership programs. In CAS professional standards for higher education. http://www.cas.edu/
Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design: Choosing among five approaches. SAGE Publishing.
Ewell, P. T., Paulson, K., & Kinzie, J. (2011). Down and in: Assessment practices at the program level. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from www.learningoutcomesassessment.org
Glesne, C. (2011). Becoming qualitative researchers: An introduction. Addison Wesley Longman.
Goertzen, B. J. (2009). Assessment in academic based leadership education programs. Journal of Leadership Education, 8(1), 148–162. https://doi.org/10.12806/v8/i1/ib3
Guthrie, K. L., & Jenkins, D. M. (2018). The role of leadership educators: Transforming learning. Information Age Publishing.
Guthrie, K. L., Teig, T., & Hu, P. (2018). Academic leadership programs in the United States. Leadership Learning Research Center, Florida State University, Tallahassee, FL.
Harvey, M., & Jenkins, D. M. (2014). Knowledge, praxis, and reflection: The three critical elements of effective leadership studies programs. Journal of Leadership Studies, 7(4), 76-85. https://doi.org/10.1002/jls.21314
Higher Education Research Institute. (1996). A social change model of leadership development guidebook (version III). Los Angeles, CA: Higher Education Research Institute.
ILA Guiding Standards Task Force. (2021, January 7). Guiding Standards for Leadership Programs 2021 Concept Paper. Retrieved from https://theila.org/about/guiding-standards-task-force/
International Leadership Association. (2009). Guiding questions: Guidelines for leadership education programs. College Park, MD: Author. http://www.ila-net.org/communities/LC/GuidingQuestionsFinal.pdf
Jenkins, D. M. (2018). Comparing instructional and assessment strategy use in graduate- and undergraduate-level leadership studies: A global study. Journal of Leadership Education, 17(1), 73-92. https://doi.org/10.12806/v17/i1/r2
Jenkins, D. M., & Dugan, J. P. (2013). Context matters: An interdisciplinary studies interpretation of the national leadership education research agenda. Journal of Leadership Education, 12(3), 15-29. https://doi.org/10.12806/v12/i3/tf1
Jenkins, D. M., Hoover, K. F, Freed, S. A., & Satterwhite, R. (2012, October 24-27). The guiding questions: A bridge across the leadership studies program great divide [Workshop]. International Leadership Association 14th Annual Global Conference, Denver, CO.
Jenkins, D. M., & Owen, J. E. (2016). Who teaches leadership? A comparative analysis of faculty and student affairs leadership educators and implications for leadership learning. Journal of Leadership Education, 15(2), 98-113. https://doi.org/10.12806/v15/i2/r1
Kellerman, B. (2018a). Professionalizing leadership. Oxford University Press. https://doi.org/10.1093/oso/9780190695781.001.0001
Kellerman, B. (2018b, October 24-27). Standards [Keynote]. International Leadership Association 20th Anniversary Global Conference, West Palm Beach, FL.
Komives, S. R., Dugan, J. D., Owen, J. E., Slack, C., Wagner, W., & Associates. (2011). The handbook for student leadership development (2nd ed.). Jossey-Bass.
Kouzes, J. M., & Posner, B. Z. (2017). The leadership challenge: How to make extraordinary things happen in organizations (6th ed.). John Wiley & Sons.
Kuh, G. D., Jankowski, N. A., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in U.S. colleges and universities. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from www.learningoutcomesassessment.org
Merriam, S. B. (1998). Qualitative research and case study applications in education. Jossey-Bass.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Sage.
Nobbe, J., & Soria, K. M. (2016). Leadership assessment from an institutional approach. In D. Roberts & K. J. Bailey (Eds.), Assessing student leadership (pp. 93–105). Jossey-Bass. https://doi.org/10.1002/yd.20203
Owen, J. E. (2012). Examining the design and delivery of collegiate student leadership development programs: Findings from the Multi-Institutional Study of Leadership (MSL-IS), a national report. Washington, DC: Council for the Advancement of Standards in Higher Education.
Perruci, G., & McManus, R. M. (2013). The state of leadership studies. Journal of Leadership Studies, 6(3), 49-54. https://doi.org/10.1002/jls.21256
Ritch, S. (2013). Formalized program review: An evolution. Journal of Leadership Studies, 6(3), 61-66. https://doi.org/10.1002/jls.21258
Ritch, S., & Roberts, D. (2005, November). Standards and guidelines for leadership programs: What shall we do? Proceedings from the 7th Annual Conference of the International Leadership Association: Emergent Models of Global Leadership. Amsterdam, The Netherlands. [CD]. College Park, MD: International Leadership Association.
Ritch, S., Robinson, B., Riggio, R., Roberts, D., & Cherrey, C. (2004, November). Emerging accreditation issues: Toward professional standards for leadership programs? Proceedings from the 6th Annual Conference of the International Leadership Association: Improving Leadership around the World: Challenges, Ideas, Innovations. Washington, DC, USA. [CD]. College Park, MD: International Leadership Association.
Rocco, M. L., & Pelletier, J. (2019). A conversation among student affairs leadership educators. In K. L. Priest & D. M. Jenkins (Eds.), Becoming and being a leadership educator (pp. 39-53). https://doi.org/10.1002/yd.20357
Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Sage Publications.
Sowcik, M., Lindsey, J. L., & Rosch, D. M. (2013). A collective effort to understand formalized program review. Journal of Leadership Studies, 6(3), 67-72. https://doi.org/10.1002/jls.21259
Yin, R. K. (1994). Case study research: Design and methods (2nd ed.). Sage Publications.
Appendix
Interview Protocol – Exploring the Process of Leadership Program Reviews in Higher Education
- Please tell me about the leadership program evaluations that you’ve done.
- What prompted the institution to initiate the program review you participated in?
- What are the expectations of the institution?
- What resources did you use?
- What was the scope of the evaluation?
- What did you find/recommend?
- What were you expected to produce or report for the institution before, during, and/or after the program review?
- What are some lessons learned? For other practitioners?
- What contextual factors influence/impact your approach to conducting a program review (e.g.):
- Co-curricular vs. curricular
- Liberal arts vs. research institution
- How the program is structurally situated within the institution
- Is there any additional information you would like to share?