Introduction
Astin (1993) described two driving forces in America that have caused institutions of higher education to reconsider their assessment practices. First, national reports on higher education have been critical of assessment activities. Second, there is increasing interest among federal and state policymakers in improving accountability in higher education. Astin reported these trends 15 years ago, yet even today, American higher education experiences pressure for greater accountability. For instance, the Secretary of Education’s Commission on the Future of Higher Education reported being disturbed by the inadequate quality of student learning and explained, “employers reported repeatedly that many new graduates they hire are not prepared to work, lacking the critical thinking, writing and problem-solving skills needed in today’s workplaces” (U.S. Department of Education, 2006, p. 3). We as leadership educators must address these national pressures toward transparency, accountability, and increased student learning.
A recent study of undergraduate leadership programs examined the size, scope, and general nature of interdisciplinary, liberal arts-oriented undergraduate degrees in leadership (Brungardt, Greenleaf, Brungardt, & Arensdorf, 2006). Unfortunately, the article did not examine the learning goals and objectives or the assessment activities of these degree programs, which should serve as the driving force behind curriculum development.
In fact, there is a lack of published literature regarding the learning goals and objectives of undergraduate leadership education programs. Numerous published articles on assessment of academic-based leadership programs focus primarily at the individual assignment or activity level (e.g., Pennington-Weeks & Kelsey, 2007; Goertzen & Rackaway, 2007) or at the course level (e.g., Seemiller, 2006). Furthermore, few articles have examined assessment of student learning beyond self-reported measures (e.g., Brungardt & Crawford, 1996; Dugan & Komives, 2007). Likewise, Hannum and Martineau (2008) examined approaches to evaluating practitioner-based leadership development programs. Each of these references offers valuable insights for specific practices of assessment in leadership education. However, they fail to describe a comprehensive approach to assessment that explicitly links assessment practices to the unique context of program-level goals in academically based, undergraduate programs in leadership.
It is imperative that we as leadership educators “get it right” with regard to demonstrating the effectiveness of our respective academic-based leadership education programs. Not only are there socio-political pressures upon us to justify our effectiveness, but the recent work of DiPaolo (2008) challenges conventional thinking about the effectiveness of leadership education programs. DiPaolo reported that the development of students’ leadership capabilities was attributed more to their personal maturation and leadership experience than to leadership education. If this is indeed the case, then we as leadership educators must accurately gauge student learning and make informed decisions aimed at enhancing our respective programs. While this paper is not intended to offer empirical evidence of student learning within academic leadership programs, it provides an overview of the current state of leadership education with regard to this challenge facing leadership educators.
A Brief History of Assessment
The first National Conference on Assessment in Higher Education, held in fall 1985, is commonly cited as the first forum intentionally devoted to issues related to the measurement of student learning (Ewell, 2002a). The conference, cosponsored by the National Institute of Education (NIE) and the American Association for Higher Education (AAHE), was prompted in large part by the report entitled Involvement in Learning (Study Group on the Conditions of Excellence in American Higher Education, 1984). This conference helped form assessment traditions around three primary centerpieces of student learning: (a) that high expectations be established for students, (b) that students be involved in active learning environments, and (c) that students be provided with prompt and useful feedback (Ewell, 2002a).
At the same time, voices outside of higher education, symbolized by the U.S. Department of Education’s A Nation at Risk (1983) report, were calling for greater accountability in education (Ewell, 2002a). A by-product of the attention paid to K-12 education was a new focus on higher education. The mid-1980s witnessed renewed activism by governors and legislatures because postsecondary education was seen as a driving engine for economic and workforce development.
By the 1990s, most states had established mandates for assessment in higher education. However, accrediting agencies had grown in influence and often replaced states as the primary stimulus for interest in institutional assessment (Ewell, 1993). According to the American Council on Education’s (ACE) annual Campus Trends survey, more than 98% of institutions reported participating in institutional assessment programs in 1993, compared to 55% that had reported established institutional assessment activities in 1987. Today, assessment has become part of the mainstream activities of higher education.
Mission-Driven Assessment: Not a Call for Homogeneity
The General Theory of Leadership project described the recent intellectual “journey” of leadership scholars to develop a unifying theory of leadership. Burns (2006) envisioned the quest for a general leadership theory as providing an intellectual frame to organize our thoughts on the topic. The project’s scholars concluded that a unifying theory of leadership is perhaps impractical, if not impossible, because there is richness in the diversity of perspectives that individuals bring from multiple disciplines to the field. Ciulla (2006) asserted that we need multiple perspectives and must engage with individuals from other academic or cultural backgrounds for a more comprehensive understanding of leadership.
To some extent, the aim of the General Theory of Leadership project parallels isomorphism theory (DiMaggio & Powell, 1983), which describes the constraining forces that drive entities (e.g., organizations) to resemble one another. Combined with current legislative and institutional pressures toward greater accountability, especially regarding the effectiveness of student learning, we may encounter tremendous forces toward homogeneity among our academic leadership programs. Further, some may perceive that engaging in intentional conversations regarding accountability and assessment of student learning could itself impose “sameness.” However, there are ways by which we as leadership educators can collaborate while maintaining our unique identities.
It is not the intent of this paper to assert that academic leadership programs should be the same, nor that we, or our respective associations, should create an accreditation body. Nonetheless, we can learn valuable lessons from other accrediting agencies. For example, the Association to Advance Collegiate Schools of Business (AACSB International) is an accrediting agency at the forefront of promoting effective assessment activities among business schools. The organization has overcome the intellectual barriers that might drive all business schools toward “sameness” via isomorphic processes. AACSB International (2008) suggested that assessment activities should be institution specific, developed and implemented around unique factors such as mission, student population, employer population, and other circumstances:
Because of differences in mission, faculty expectations, student body composition, and other factors, schools vary greatly in how they express their learning goals. Definition of the learning goals is a key element in how the school defines itself. Thus, care should be exercised in establishing goals and in the regular review and revision of the learning goals and measurement of their accomplishment. (p. 61)
This is widely considered a “mission-driven” approach to developing and implementing comprehensive assessment plans. The mission of the academic leadership program of the local institution should inform the specific learning goals and objectives. Learning goals are key indicators of how the local academic program defines itself. Leadership programs may choose similar domains of student learning (e.g., leadership theory, effective communication, and critical thinking). However, each should develop learning goals and objectives and implement assessment activities that are unique to the local institution.
Therefore, we can and should maintain the missions, content, and other activities that make our academic leadership programs unique from one another. There are clearly many qualities that enhance the distinctiveness of our academic leadership programs. For instance, there are strong programs that focus on non-profit leadership (e.g., Rockhurst University). Other well-developed programs take an organizational leadership lens toward the academic program (e.g., Fort Hays State University), while others approach leadership from a business perspective (e.g., Franklin University) (Brungardt et al., 2006). Academic programs also build curricula around the work of a variety of leadership scholars such as James MacGregor Burns (e.g., University of Richmond), James M. Kouzes and Barry Z. Posner (e.g., Wright State University), and Robert Greenleaf (e.g., Chapman University) (Brungardt et al., 2006). This diversity of perspectives enhances our overall understanding of leadership and adds value to our field. Nonetheless, this author’s contention is that we can and should learn from the assessment practices of other academic leadership programs to help ensure greater accountability and legitimacy as an academic discipline.
Properties of Evidence of Student Learning
The Guidelines for Leadership Education Programs Learning Community (n.d.) was a project initiated by members of the International Leadership Association (ILA). This volunteer project grew out of the 2002 ILA conference in Seattle, at which leadership educators sought to identify guiding questions to assist in shaping effective programs. The learning community provided useful questions regarding topics such as formative and summative assessment and quantitative and qualitative data collection methods. However, leadership educators are limited if we only think about assessment in these terms. We must consider additional properties and sources of evidence of student learning if we hope to develop truly effective assessment programs.
An important property of evidence of student learning outcomes is the degree of “authenticity.” Not all assessment activities are considered authentic. This category is reserved only for tasks that “closely simulate or actually replicate challenges faced by adults or professionals” (Wiggins, 1998, p. 141). For instance, participating in a leadership initiative is viewed as more “authentic” than answering questions about this activity. Generally, authentic forms of evidence are more valued because they are closer to real leadership challenges (Ewell, 2002b).
Another critical property of evidence is whether it is direct or indirect, “based on the distance from the cognitive construct of learning” (Ewell, 2002b, p. 21). Indirect measures typically reflect evidence about how students “feel” about the educational experience. Indirect measures capture consequences of learning, such as related behaviors (e.g., job placement, civic participation, etc.), or testimony about learning (e.g., self-reports about learning gain or related behaviors as reported through questionnaires or interviews) (Ewell, 2002b). While these data may provide useful information about student attitudes, they may not fully capture evidence of actual knowledge or the application of that knowledge. Direct assessment, by contrast, gathers evidence of student learning itself. Examples of this approach include oral presentations, projects, demonstrations, case studies, simulations (Palomba & Banta, 1999), or “other forms of student work that demand observable deployment of the ability in question” (Ewell, 2002b, p. 21). Direct evidence of student learning is generally accorded greater credibility and is considered more “authentic” than indirect measures.
Sources of Evidence of Student Learning
The sources presented here are not intended to be an exhaustive list of all ways and means of assessment activities. Rather, the sources of evidence are offered as general categories with brief descriptions of relative strengths and limitations of each.
Direct Assessment Techniques
Standardized exams commonly rely upon forced-choice items (Ewell, 2002b) that primarily measure the cognitive domain of learning. Several industries related to leadership education and leadership development have certification exams. For instance, the Society for Human Resource Management (SHRM) offers the Professional in Human Resources (PHR) and Senior Professional in Human Resources (SPHR) certification exams that cover knowledge in the functional areas of human resources. Widely used and highly respected standardized exams have also been developed to measure abilities related to leadership development. For example, the Collegiate Learning Assessment (CLA) assesses dimensions such as critical thinking and analytical writing. The CLA uses “real life” activities that challenge participants to review and evaluate arguments and measures the ability to interpret, analyze, and synthesize information (Council for Aid to Education, 2008).
Student attainment or pass rates on standardized exams provide valuable success measures for academic leadership programs because they permit benchmark comparisons with other leadership programs. However, standardized exams are often expensive and are only as useful as their alignment with the expressed learning goals and objectives of the particular academic program.
Locally developed comprehensive exams can provide several advantages over standardized exams. Perhaps the most significant advantage is the flexibility to develop the exam to measure what matters most to the local leadership program. The local program has the greatest control to design the exam in alignment with the explicit learning goals and objectives of the leadership program. Further, local comprehensive exams can offer timely and relevant feedback to the institution and are typically less costly than standardized exams. However, several drawbacks remain for this assessment approach. Local exams require extensive effort to develop and administer, and they do not permit benchmark comparisons with other academic programs.
A review of published literature yielded no studies that examine locally developed comprehensive exams; however, it is still possible, if not likely, that institutions employ this technique. For instance, Fort Hays State University recently developed a 25-item multiple-choice exam intended to assess student knowledge of leadership theory, administered to students at entry into and exit from the undergraduate leadership program. However, the test is limited because it exclusively focuses on leadership theory and not on other important content areas of leadership education. Further, “conversation” among and between leadership programs can certainly enhance our understanding of “best practices” regarding this source of evidence of student learning.
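To illustrate how entry and exit scores from such a locally developed exam might be summarized, the brief sketch below computes mean scores and gains from hypothetical paired records on a 25-item exam. The student data, field names, and reporting choices are illustrative assumptions only, not the practice of any particular program.

```python
# Minimal sketch (hypothetical data): summarizing entry/exit scores from a
# locally developed comprehensive exam. A real program would pull matched
# student records from its own information system.

from statistics import mean

# Hypothetical paired records: (student_id, entry_score, exit_score) on a
# 25-item exam scored 0-25.
records = [
    ("s01", 11, 19),
    ("s02", 14, 21),
    ("s03", 9, 17),
    ("s04", 16, 22),
]

entry_scores = [entry for _, entry, _ in records]
exit_scores = [exit_ for _, _, exit_ in records]
gains = [exit_ - entry for _, entry, exit_ in records]

print(f"Mean entry score: {mean(entry_scores):.1f} / 25")
print(f"Mean exit score:  {mean(exit_scores):.1f} / 25")
print(f"Mean gain:        {mean(gains):.1f} items")
print(f"Students showing a gain: {sum(g > 0 for g in gains)} of {len(records)}")
```

A summary such as this offers only a descriptive picture of learning gain; interpreting it still depends on how well the exam aligns with the program’s stated learning goals and objectives.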
Simulations (tasks and demonstrations) can challenge students to demonstrate a skill when it is not feasible to use a real-world setting (Palomba & Banta, 1999), and they can provide valuable evidence of student attainment that is both direct and authentic (Ewell, 2002b). Simulations often require even more extensive effort to develop and administer than local comprehensive exams: not only must an evaluation tool (e.g., scoring guides or rubrics) be designed and deployed, but an activity must also be constructed that closely approximates a real-world setting.
Academic leadership programs may take advantage of pre-existing simulations that are widely disseminated in popular press publications and in refereed journals. For example, Rackaway and Goertzen (2008) published a demonstration whereby students engaged in an in-class debate regarding changes in Social Security policy. Student performance was evaluated by instructors with a grading rubric (See Figure 1). Pre-existing simulations such as this are useful tools for students to demonstrate proficiency. However, scoring guides and perhaps even the activities themselves must be adapted to be in alignment with the specific learning goals and objectives of the local academic leadership program.
Figure 1
Sample Rubric

Leadership: Theory to Practice

Novice: Leadership needed is unclear and too long; does not express ideals or values, commitments or aspirations; essentially no link between leadership theory and the problem and vision for the current situation.

Apprentice: Leadership needed is either unclear or too long; expresses ideals or values in general terms but does not express specific commitments or aspirations that manifest a vision; limited connection between leadership theory and the problem and vision for the current situation.

Proficient: Leadership needed is reasonably clear and not obviously too long; expresses ideals or values and aspirations that suggest a vision; some connection between leadership theory and the problem and vision for the current situation.

Distinguished: Leadership needed defines clarity of purpose; explicitly states vision and values that are realistic and achievable; clearly defined connection between leadership theory and the problem and vision for the current situation.

Writing Quality

Novice: Considerable difficulty expressing ideas or descriptions clearly. Many grammatical, syntactical, and spelling errors.

Apprentice: Difficulty expressing ideas, feelings or descriptions. Needs to work on grammar, spelling, etc.

Proficient: Good writing style with solid ability to convey meaning. Few grammar, syntax and spelling errors.

Distinguished: Strong style with clear ability to express thoughts and point of view. Excellent grammar, syntax, spelling, etc.
Student work is another category of assessment measures that includes a vast range of evidence representing the work products of students, such as “tests, essays, posters, oral reports, book reviews, term papers” (Palomba & Banta, 1999, p. 161). These assessment techniques are embedded into normal classroom activities and require faculty members both to submit grades to students and to provide various kinds of information to an assessment committee or individual in charge of collecting assessment data for centralized analysis. It has been this author’s experience that faculty are most open to this form of data collection because it does not require them to substantially change what they already do in the classroom. Rather, it only requires a modification of how student work is evaluated. Useful assessment data can be collected through either a Primary Trait Analysis or other grading rubrics. Similar to a grading rubric, a Primary Trait Analysis is typically comprised of several factors (or traits) to be evaluated (See Figure 2). Each trait is rated on a three- or five-point numeric scale accompanied by an explicit statement that describes student performance at that level (Palomba & Banta, 1999).
Figure 2
Sample Primary Trait Analysis for Presentation

Each criterion is rated on a three-point scale, and a score is recorded for each criterion.

Voice & Pacing
3: Poised, clear articulation, proper volume
2: Not as polished, uneven rate
1: Inaudible or too loud, rate too slow/fast

Eye Contact
3: Maintains eye contact, seldom looks at notes
2: Occasionally uses eye contact, frequently looks at notes
1: Reads all of report with little/no eye contact

Organization
3: Presents information in logical, interesting sequence
2: Audience has some difficulty following presentation; jumps around
1: Audience cannot understand; little or no sequence of information
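To illustrate how Primary Trait Analysis ratings submitted by instructors might be aggregated for centralized, program-level analysis, the sketch below averages hypothetical ratings on the traits shown in Figure 2. The student ratings and the reporting format are assumptions for illustration, not a prescribed procedure.

```python
# Minimal sketch (hypothetical data): aggregating Primary Trait Analysis
# ratings across students for program-level reporting. Trait names follow
# Figure 2; the ratings below are invented for illustration.

from statistics import mean

TRAITS = ["Voice & Pacing", "Eye Contact", "Organization"]  # each rated 1-3

# Hypothetical ratings submitted by a course instructor: one dict per student.
ratings = [
    {"Voice & Pacing": 3, "Eye Contact": 2, "Organization": 3},
    {"Voice & Pacing": 2, "Eye Contact": 2, "Organization": 2},
    {"Voice & Pacing": 3, "Eye Contact": 3, "Organization": 2},
]

# Program-level summary: mean rating per trait and the share of students at
# the top level, which an assessment committee might compare to a target.
for trait in TRAITS:
    scores = [r[trait] for r in ratings]
    share_top = sum(s == 3 for s in scores) / len(scores)
    print(f"{trait}: mean {mean(scores):.2f} of 3, {share_top:.0%} at top level")
```

Summaries like this are most meaningful when each trait has been deliberately mapped to one of the program’s explicit learning goals or objectives.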
An important advantage of this direct assessment technique is that assignments and their respective evaluation devices can be intentionally linked to the explicit learning goals and objectives of the academic program. Additionally, faculty making use of primary trait analysis or other grading rubrics can provide students with timely and relevant feedback. A potential drawback of this assessment approach is that faculty are sometimes uncomfortable, if not unwilling, to share information about what they are doing in the classroom out of fear that it may be “held against them” in a merit or tenure review process.
Never should the processes involved in measuring student achievement be used to also evaluate individual faculty performance. Another potential drawback from this approach to measure student learning is the difficulty involved in getting information to add up to a meaningful whole (Palomba & Banta, 1999). For instance, in the absence of a capstone course or other culminating experience it can be difficult to see whether students are integrating what they are learning.
A portfolio is a performance assessment compiled over time from student work that involves “gathering a body of evidence of one’s learning and competence” (Lyons, 1998, p. 19). Portfolios provide reflective statements about the progress of student achievement with regard to established learning goals and objectives (Palomba & Banta, 1999). Portfolios are appealing in large part because they can contain rich and diverse sources of information collected over a long period of time. Since portfolios include longitudinal data linked across one or more courses, they can be used to assess student improvement and growth in overall quality. There are a variety of strategies for including material in the portfolio. For instance, students can be asked to contrast their best work with weaker work, or students can include examples of how their thinking about a subject has changed over time. Essentially, there is no limit to the kinds of items that can be included in portfolios. Nonetheless, the information contained in portfolios ought to include representative examples of student learning.
How portfolios are evaluated determines whether they are actually direct or indirect measures of student learning. Olsen (2009) described the use of portfolios in leadership education whereby students provided self-report data on the level of learning within the academic program; this would be an indirect measure of student learning. If, however, faculty or other qualified reviewers provided a holistic evaluation of student attainment across all of the representative student work, then this would be a direct measure of learning.
Perhaps the most significant advantage of portfolios compared to other assessment techniques is that they contain longitudinal information and opportunities for student reflection. Courts and McInerney (1993) asserted that portfolios also challenge students to take responsibility for their own learning and give students a voice in assessment. Another important advantage of using portfolios is that they can be designed to directly measure what matters most, because representative student work can be linked directly to the learning goals and objectives of the academic program. Additionally, documents or other representative examples of student performance can be embedded into the normal academic activities of coursework.
Using portfolios to measure student achievement also has distinct disadvantages. Portfolios require a substantial amount of time from both students and faculty for planning and carrying out the portfolio process (Palomba & Banta, 1999). Faculty must be heavily involved not only in the planning process to determine what information will be included, but also in evaluating the portfolios and administering the process.
Indirect Assessment Techniques
Self-reports require testimony of learning from students themselves. Data collected through this form of assessment may be qualitative or quantitative. Quantitative data are typically gathered from surveys or questionnaires that ask students or graduates to rate their current level of knowledge or skill regarding a particular learning outcome. Individual or focus group interviews may yield “rich and thick” information about student experiences. Self-report techniques are perhaps the only means of obtaining information regarding non-cognitive outcomes such as attitudes, beliefs, or dispositions (Ewell, 2002b). Surveys are an especially popular form of self-report because of the ease with which a wide variety of information about respondents’ attitudes and opinions can be collected. The National Survey of Student Engagement (NSSE) is a useful self-report survey assessing student engagement along five dimensions: Level of Academic Challenge, Active and Collaborative Learning, Student-Faculty Interaction, Enriching Educational Experiences, and Supportive Campus Environment.
The self-report method is also useful when considering the “value added” by the educational experience. This approach asks students or graduates to consider how much they have grown as a result of the academic program, which can yield useful information. Perhaps the best-known “value-added” leadership development assessment is the Multi-Institutional Study of Leadership (MSL). Based upon the Social Change Model of Leadership Development (HERI, 1996), a recent report by the National Clearinghouse for Leadership Programs presented data from over 50,000 student responses (Dugan & Komives, 2007). Data from this self-report measure indicated that students perceived improvement in their leadership abilities along each dimension.
Both the NSSE and the MSL are quantitative examples of self-report assessment techniques, while other leadership programs integrate qualitative approaches. Black, Metzler, and Waldrum (2006) described the use of focus groups to assess the impact of a statewide leadership development program. While the study was conducted among alumni of practitioner-based leadership development programs, there are valuable lessons to be learned for academic-based leadership programs as well. This qualitative approach has distinct advantages compared to quantitative approaches because it yields “thick and rich” information about student learning as well as potential gaps between program outcomes and student attainment. Further, this approach allows greater flexibility for program evaluators because they can probe beyond the initial responses offered by study participants.
There are a number of methods for collecting self-report data. The self-report approach to assessment is popular because of the ease and efficiency of collecting the information, particularly when compared to the expense of standardized exams or the labor-intensiveness of “more authentic forms of evidence that requires human grading” (Ewell, 2002b, p. 26).
Behavioral outcomes provide useful indirect information about student learning. This technique may be particularly beneficial for undergraduate leadership programs, as many seek to foster civic involvement and participation. Success measures such as frequency of volunteerism, voting, or other civic behaviors can provide meaningful information about program effectiveness. In many cases, information is collected through surveys of graduates or former students (Ewell, 2002b), but it can also be drawn from institutional records, such as graduation rates or enrollment in post-undergraduate education. This technique possesses efficiencies (e.g., cost and time) similar to those of self-report approaches. However, this data source has a distinct disadvantage: behavioral outcomes often serve only as a proxy measure of actual student attainment.
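As a minimal illustration of how behavioral outcome data might be summarized, the sketch below computes the share of graduates reporting selected civic behaviors from hypothetical alumni survey records; the fields and values are assumptions for illustration only.

```python
# Minimal sketch (hypothetical data): summarizing behavioral outcomes reported
# by program graduates. Such data would typically come from an alumni survey
# or institutional records; these records are invented for illustration.

# Hypothetical alumni records: did the graduate volunteer or vote in the past
# year, and are they enrolled in post-undergraduate education?
alumni = [
    {"volunteered": True,  "voted": True,  "grad_school": False},
    {"volunteered": False, "voted": True,  "grad_school": True},
    {"volunteered": True,  "voted": False, "grad_school": False},
    {"volunteered": True,  "voted": True,  "grad_school": True},
]

# Report the share of graduates reporting each behavior. As noted above, these
# rates are only proxy indicators of actual student attainment.
for behavior in ("volunteered", "voted", "grad_school"):
    rate = sum(record[behavior] for record in alumni) / len(alumni)
    print(f"{behavior}: {rate:.0%} of responding graduates")
```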
Robert Putnam’s (2000) report regarding the decline of social capital in the United States made effective use of behavioral outcome measures representing changes in civic engagement. He integrated behavioral outcomes such as “level of volunteering,” “serving as an officer or on a committee of a local club,” and “level of philanthropic generosity.” While this study reported general trends in the United States, leadership educators, particularly those in programs with a civic engagement focus, may find some of these behavioral outcomes to be meaningful indicators of student learning.
Conclusions
As leadership educators in undergraduate academic leadership programs, we must participate in this process and engage in mutual conversations about “best practices” for these activities. Brungardt et al. (2006) accurately asserted, “leadership educators must become much more intentional in our collaboration. We are so busy being ‘lone rangers’ in the field that we fail to practice what we preach. We, like so many others in organizational life, talk the talk of collaboration, but fail to walk it” (p. 22). We are still failing to effectively engage in many of these conversations and collaboration opportunities, especially on the topic of assessment of our academic leadership programs.
Again, this is not a call for us to create an accrediting agency to “certify” our leadership programs. Rather, we, as individual faculty and administrators of local academic leadership programs, along with our respective member associations (e.g., the International Leadership Association and the Association of Leadership Educators), must intentionally engage in conversations regarding sound student learning outcomes and the measurement of student attainment. We can clearly learn from other disciplines that are already “doing it well,” and we must collaborate to ensure that learning objectives are being met.
In this uncertain economic environment, citizens, legislators, and other policy and budgetary authorities are asking tough questions regarding accountability in higher education. Especially because leadership is a comparatively young academic discipline, it is critically important that we meet these serious questions with serious answers.
References
AACSB International. (2008). Eligibility Procedures and Accreditation Standards for Business Accreditation. Retrieved January 20, 2009, from http://www.aacsb.edu/accreditation/process/documents/AACSB_STANDARDS_Revised_Jan08.pdf
Astin, A. W. (1993). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. Phoenix, AZ: Oryx Press.
Black, A. M., Metzler, D. P., & Waldrum, J. (2006). That program really helped me: Using focus group research to measure the outcomes of two statewide leadership programs. Journal of Leadership Education, 5(3), 53-65.
Brungardt, C. L., & Crawford, C. B. (1996). A comprehensive approach to assessing leadership students and programs. The Journal of Leadership Studies, 3(1), 37-48.
Brungardt, C. L., Greenleaf, J., Brungardt, C. J., & Arensdorf, J. (2006). Majoring in leadership: A review of undergraduate leadership programs. Journal of Leadership Education, 5(1), 4-25.
Burns, J. M. (2006). Afterword. In G. R. Goethals and G. L. J. Sorenson (Eds.), The Quest for a General Theory of Leadership. Northampton, MA: Edward Elgar Publishing.
Ciulla, J. B. (2006). What we learned along the way: A commentary. In G. R. Goethals and G. L. J. Sorenson (Eds.), The Quest for a General Theory of Leadership. Northampton, MA: Edward Elgar Publishing.
Council for Aid to Education (2008). Collegiate Learning Assessment (CLA) brochure. Retrieved January 20, 2009 from http://www.cae.org/content/pdf/CLABrochure2008.pdf
Courts, P. L., & McInerney, K. H. (1993). Assessment in higher education: Politics, pedagogy, and portfolios. New York: Praeger.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147-160.
DiPaolo, D. G. (2008). Leadership education at American universities: A longitudinal study of six cases. Lewiston, NY: Edwin Mellen Press.
Dugan, J. P., & Komives, S. R. (2007). Developing leadership capacity in college students: Findings from a national study. A Report from the Multi-Institutional Study of Leadership. College Park, MD: National Clearinghouse for Leadership Programs.
Ewell, P. (1993). The role of states and accreditors in shaping assessment practices. In Trudy W. Banta (Ed.), Making a Difference: Outcomes of a Decade of Assessment in Higher Education. San Francisco, CA: Jossey-Bass.
Ewell, P. (2002a). An emerging scholarship: A brief history of assessment. In Trudy W. Banta and Associates (Eds.), Building a scholarship of assessment (pp. 3-25). San Francisco, CA: Jossey-Bass.
Ewell, P. (2002b). Applying learning outcomes concepts to higher education: An overview. National Center for Higher Education Management Systems (NCHEMS).
Guidelines for Leadership Education Programs Learning Community (n.d.). Retrieved April 15, 2009, from ILA: http://ilaguidelineslc.pbwiki.com/FrontPage
Hannum, K. M., & Martineau, J. W. (2008). Evaluating the impact of leadership development. San Francisco, CA: Pfeiffer.
Higher Education Research Institute [HERI]. (1996). A social change model of leadership development: Guidebook version III. College Park, MD: National Clearinghouse for Leadership Programs.
Lyons, N. (1998). Constructing narratives for understanding: Using portfolio interview to scaffold teacher reflection. In N. Lyons (Ed.), With Portfolio in Hand: Validating the New Teacher Professional. New York: Teachers College Press.
Olsen, P. E. (2009). The use of portfolios in leadership education. Journal of Leadership Education, 7(3), 10-17.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco, CA: Jossey-Bass Publishers.
Pennington-Weeks, P., & Kelsey, K. D. (2007). Student project teams: Understanding team process through an examination of leadership practices and team culture. Journal of Leadership Education, 6(1), 209-225.
Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. New York: Simon & Schuster.
Rackaway, C., & Goertzen, B. J. (2008). Debating the future: A Social Security political leadership simulation. Journal of Political Science Education, 4, 330-340.
Seemiller, C. (2006). Impacting social change through service learning in an introductory leadership course. Journal of Leadership Education, 5(2), 41-59.
Study Group on the Conditions of Excellence in American Higher Education (1984). Involvement in Learning: Realizing the Potential of American Higher Education. Washington, DC: National Institute of Education.
U.S. Department of Education (1983). A nation at risk: The imperative for educational reform. Washington, DC: Author.
U.S. Department of Education (2006). A national dialogue: The Secretary of Education’s commission on the future of higher education. Washington, DC: Author.
Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco, CA: Jossey-Bass.