University-wide Assessment

1999 Annual Report

University of Nebraska–Lincoln
September 2000

Hard copies of this report have been sent to the deans of all UNL colleges and to the Teaching and Learning Center. Questions concerning this report or university assessment activities should be directed to the Director of Institutional Assessment.


Table of Contents

Executive Summary


University-wide Assessment Committee Activities

Activity 1: Integrate outcomes assessment into UNL's program planning and budgeting process

A. Academic Program Review
B. External Accreditation
C. Mid-Cycle Assessment Reviews

Activity 2: Facilitate the annual reporting of college, departmental, and program assessment

Activity 3: Initiate and monitor assessment of graduate student learning outcomes

Activity 4: Monitor assessment of the Comprehensive Education Program (CEP)

A. Institution-Wide Surveys
B. Peer Review of Teaching and Assessment Project

Activity 5: Communicate useful assessment information to university faculty and administration

A. Website
B. University-Wide Assessment Steering Committee
C. Mid-Cycle Assessment Reviews
D. Teaching and Learning Center

Activity 6: Plan for the focus visit by North Central Association (NCA)

Conclusions and Recommendations


Appendix A - Common Misperceptions about the Outcomes Assessment Process
Appendix B - Copy of Dean's letter requesting Annual Assessment Reports
Appendix C - Effects of the assessment of undergraduate outcomes
Appendix D - Effects of the assessment of graduate outcomes
Appendix E - Annual CEP Assessment report
Appendix F - Reaccreditation Letter from North Central Association


The University-wide Assessment Steering Committee was created to "facilitate feedback and coordination among and between the various aspects of assessment". It has been instrumental in the evolution of assessment on our campus. As each unit has carried out its responsibility to assess the student learning outcomes that the unit values, the University-wide Assessment Committee has continued its responsibility to serve UNL in an advisory capacity.

During the 1999-2000 academic year, many units have made tremendous strides in identifying what learning should be occurring, establishing mechanisms for measuring that learning, and using the evidence collected for programmatic improvement. However, progress still varies across units. Some units have found their assessment evidence useful for reflection about the program, while in others the process continues to evolve. This differential progress is acceptable and logical given that outcomes assessment is viewed as an ongoing rather than an episodic process, one that examines teaching and learning over time in the spirit of continuous improvement.

During the 1999-2000 academic year, the committee, with the assistance of the University-wide Assessment Coordinator, has continued to oversee or conduct the following activities:

Activity 1: Integrate outcomes assessment into UNL's program planning/budgeting process.

Activity 2: Facilitate the annual reporting of college, departmental, and program assessment activities.

Activity 3: Initiate and monitor, where appropriate, the assessment of graduate student learning outcomes.

Activity 4: Monitor assessment of the Comprehensive Education Program.

Activity 5: Communicate useful assessment information to university faculty and administration.

Activity 6: Plan for a focus visit by North Central Association (NCA) on the progress of outcomes assessment at UNL.

The purpose of this University-wide Annual Assessment report is to summarize the results and conclusions of activities conducted by the individual units and overseen by the University-Wide Assessment Committee. This summary and reflection will serve as a focus for determining the future direction of outcomes assessment at UNL.

It is necessary to clarify the currency of information contained in this report because of the cycle used for colleges and departments to report on their annual assessment activities. The effects of outcomes assessment activities discussed under Activity 2 (Annual Reporting) and Activity 3 (Graduate Assessment) and reported in Appendix C (Effects of the assessment of undergraduate outcomes) and Appendix D (Effects of the assessment of graduate outcomes) are based on activities conducted by individual units during the 1998-99 academic year and reported during the Fall of 1999. The remaining information discussed in this report is based on activities overseen by the University-Wide Assessment Committee during the 1999-2000 academic year.

University-wide Assessment Committee Activities

Activity 1: Integrate outcomes assessment into UNL's program planning and budgeting process

Academic Program Reviews (APR), External Accreditations, and Mid-Cycle Assessment Reviews are three processes used to integrate assessment into UNL's program planning and budgeting. The functioning of each process during the 1999-2000 academic year is discussed below.

A. Academic Program Reviews

Academic Program Reviews (APR) are intended to ensure the quality of academic programming, both instructional and non-instructional.1 The review team is to consider the environment at the University of Nebraska - Lincoln in addition to evaluating program quality.2 In fall 1996 academic program review procedures were changed to incorporate assessment activities; however, the first self-study reports to include this information were filed in spring 1997. This delay allowed programs to implement assessment plans designed during the 1995-96 academic year. The extent to which programs and review teams incorporate outcomes assessment activities and evidence into their reports is surveyed each year. This year, the self-study documents and team reports for the five Academic Program Reviews conducted in the 1999-2000 academic year indicate a marked improvement in the extent to which, and the quality with which, assessment information is included in the process.3 The following is a description of how assessment activities and evidence were incorporated into self-study documents and review team reports.

Self-Study Documents

A significant step towards fully integrating assessment information into the APR process occurred in 1999-2000. Self-study documents submitted in 1999-2000 focused more on student learning issues when discussing strengths and areas in need of improvement. In addition, self-study documents reflected an increased awareness and recognition of the role of outcomes assessment in programmatic improvement. Self-study documents from 1997-98 and 1998-99 typically included only a description of the program's learning objectives, assessment methods and procedures, and sometimes a commentary on the implementation of their assessment activities and the perceived usefulness of those efforts. Frequently this information was included in an appendix rather than incorporated into the self-study document itself.

The conclusion that outcomes assessment activities and evidence are beginning to play a greater role in the APR process is supported by the following themes found in this year's documents:

  • Summaries of assessment evidence, in text or table form, were frequently included in the self-study document.
  • Assessment evidence (particularly from senior and alumni surveys) was used more often in identifying programmatic strengths and areas for improvement.
  • More discussions focused on graduate assessment activities and evidence.
  • All reports identified programmatic and curricular issues related to student learning.
  • Outcomes assessment activities and evidence accompanied discussions about the teaching mission of the program.
In addition to these themes, two of the five self-studies incorporated assessment activities and evidence even further:
  • Assessment evidence was clearly linked to the identification of learning issues and solutions.
  • They discussed activities conducted, or to be conducted, to follow up on programmatic changes intended to improve student learning.
  • They included a detailed description of the timeline for assessment, the audiences for assessment, the structure of the assessment process (e.g. methods, administration, responsibility), and how results are analyzed and used.

These two self-studies illustrated how assessment evidence can be meaningfully incorporated into the APR process. Their approaches highlight several ways that the use of assessment evidence in self-studies can be strengthened.

  • Outcomes assessment evidence should not only be presented but also reflected upon. A discussion of possible solutions or what the results may mean should accompany the presentation. Assessment evidence should be used to determine how to improve the department's programs in the next five years.
  • Multiple sources of assessment evidence should be used to reach conclusions about programmatic strengths and areas for improvement. The assessment evidence referenced in self-study documents came mainly from indirect measures (e.g. student and alumni surveys). In the future, programs should determine how data from indirect and direct measures can be combined for stronger evidence of program effectiveness or the need for improvement.

The use of assessment information in the programmatic improvement process is occurring. In fact, two self-study reports highlighted the potential contribution outcomes assessment evidence can make when it is used to support conclusions about the program.

Review Team Reports

In addition to the increased integration and use of assessment information in self-study documents, review team reports also exhibited an increased recognition of outcomes assessment. An examination of review team reports indicated the following recurring themes:

  • Teams often suggested solutions addressing student learning issues. Sometimes these issues were identified in the self-study document and sometimes they were not.
  • Teams sometimes recommended alternative strategies for collecting assessment information and at least commented on the appropriateness of the program's learning objectives and assessment methods.
  • In several instances, review teams used results provided in the self-study to back up a suggestion made in their report.

Despite an increased awareness and recognition of assessment activities and evidence by review teams, two review team reports hardly addressed student learning. In these two cases, the review team's primary interest was the need for additional faculty lines and/or a stronger research agenda. These issues overshadowed others (such as student learning), and perhaps in these circumstances rightly so. As discussed in last year's annual report, assessment of student learning outcomes is only one factor to consider in evaluating program quality. Other issues affecting program quality may be paramount in some program reviews. When this occurs, a review team may offer no comment or suggestion with regard to student learning or the program's outcomes assessment activities and evidence. Therefore, it is important to continue to recognize that the objectives of the program review and the objectives of outcomes assessment are not always coterminous.

B. External Accreditations

In 1999-2000, one college at UNL underwent a review by its accrediting commission. The commission had recently adopted criteria requiring programs to identify learning outcomes, methods of assessment, and a process that leads to continuous quality improvement. Comments in the accreditation site visit report frequently commended the college's progress and provided suggestions for improving its process and methods. These responses reinforced the commission's guidelines, which are intended to make assessment evidence useful for improving student learning. This feedback indicates that professional accrediting agencies can play a significant role in the progress of outcomes assessment at UNL.

C. Mid-Cycle Assessment Reviews

The 1999-2000 academic year marks the second year in which mid-cycle assessment reviews were conducted.5 In addition to the four reviews completed in 1998-99, seven reviews were conducted in 1999-2000. Mid-cycle reviews are scheduled to occur two to three years before the academic program review (APR). The purpose of the review is twofold:

  1. It provides a forum in which a focused discussion of assessment activities allows faculty to reflect on their efforts and to gather suggestions or ideas for improving their assessment plans so that they yield meaningful and useful information about student learning.
  2. It provides a mechanism for: 1) clarifying the expectations for assessment efforts, 2) sharing successful assessment strategies used by other colleges/programs at UNL or at other institutions, and 3) obtaining a broad sense of where the institution stands in its assessment efforts.

The following discussion provides a summary of the extent to which these two purposes were served in the 1999-2000 reviews.

Purpose 1: Reflection and Suggestions

One goal of the review is to assist a department in determining the usefulness of its assessment process in improving student learning. Two criteria are used primarily in determining whether an assessment process is useful. First, how well does the assessment process provide insight into learning issues that interest the department? Second, does the amount of faculty time invested in conducting assessment yield the evidence needed to improve the educational process? Focusing on these criteria in the mid-cycle reviews conducted in 1999-2000 yielded the following lessons:

  • Alternative strategies for directly measuring student learning were explored once it was determined that the current method required an inordinate amount of faculty time and was not providing useful evidence about student learning. The alternative strategies proposed would make better use of existing courses and course products rather than administering an additional process beyond students' coursework. The benefit of these alternative strategies is that they would minimize the demand on faculty time while still providing useful information.
  • The issues underlying the redesign of a major served as a focus for also redesigning the assessment process for that major. Because the new major attempted to improve the "progressivity" of the curriculum, obtaining assessment information from students throughout their program (in addition to information collected at the conclusion of their program) was considered. This additional information would assess student learning developmentally as well as provide a mechanism for monitoring distinctions outlined for different course levels.
  • It was suggested that indicators of student learning representing various inputs, processes, and outputs in the educational process be linked to improve their usefulness. This would involve determining which inputs affected the processes that affected outputs. Linking indicators could assist in determining how the modification of one indicator led to changes or improvements in other indicators. In addition, it could assist in determining what changes might lead to improvements in student learning and whether additional information was needed to obtain more compelling and useful evidence about student learning.
  • Because only a small number of majors graduated from a program each year, it was difficult to draw accurate conclusions about student learning from the assessment evidence. It was suggested that assessment evidence be collected every year but analyzed only every few years. This scheduling provided more stable evidence on which to base conclusions about student learning. In addition, this change made time available for addressing the department's other contributions to student learning (e.g. graduate programs, service courses).
  • To make the assessment process more manageable, a framework was created to determine which learning objectives were already being assessed by products from existing courses and internships. This exercise ensured that add-on assessment methods would not be created when methods for assessing those objectives already existed.
  • The appendix of the mid-cycle report was modified to enhance its usefulness. It was expanded to serve as a toolbox addressing the specific assessment activities and issues of the department. This toolbox includes practical how-to resources (e.g. book excerpts, articles, and checklists), examples from other departments, colleges, or institutions, and frameworks for organizing and structuring assessment activities. Although the appendix was customized for each department, several resources proved applicable in almost all circumstances (e.g. example senior surveys, suggestions for writing objectives).

Purpose 2: Communication

The mid-cycle review has been an effective forum for clarifying expectations and discussing the benefits of outcomes assessment. The Coordinator has been told on more than one occasion that the meetings give faculty a better sense of how and why assessment should be conducted and reduce their anxiety about the process. Specifically, the mid-cycle review has provided the opportunity to address common misperceptions. Appendix A contains misperceptions typically encountered at any institution engaging in a process of assessing outcomes.

In addition to clarifying expectations and correcting misperceptions, the Coordinator specifically and continually emphasizes that the purpose of the mid-cycle assessment review is to be helpful and constructive to departmental assessment efforts. It does so by encouraging and facilitating assessment activities that a department will find beneficial to its program's unique goals. The Coordinator also emphasizes that the purpose of the review is not to examine a department's assessment results or to determine how well its students are meeting learning objectives, but to focus on the department's assessment processes. This focus assists in determining how well the department's assessment activities provide information on student learning outcomes that the program can use to improve its educational program.

The review has also been used to share assessment strategies used by other programs and to encourage and facilitate the use of existing data in the assessment of learning outcomes. In this year's reviews, assessment instruments and strategies from other programs were frequently shared. For example, senior and alumni surveys used in other departments could be shared among very different disciplines and still provide useful information about methods for assessing student learning (e.g. survey item format or content). In addition, a method used in the Architecture department that samples student work of varying performance levels could be used by a variety of departments. This strategy can be applied universally because it involves sampling the best, worst, and average performance for comparison from one year to the next.

The reviews conducted in 1999-2000 revealed several things about the course of outcomes assessment at UNL.

  1. First, there are varying levels of engagement with and acceptance of the outcomes assessment process. Some faculty members are enthusiastic about the potential of outcomes assessment to improve their programs and their students' learning. Others find the process a burdensome exercise that provides very little useful insight. This disparity can be attributed in part to the different missions of different departments. Programs whose missions focus heavily on the success of their teaching view outcomes assessment more positively, while those whose missions focus more on their research agenda view it less positively.
  2. Although every department has at least one individual in charge of assessment, and often a group of individuals interested in and committed to the assessment process, the success of the process depends on whether all faculty in the department participate. It is important that every faculty member who contributes to a program understands and discusses the learning outcomes that graduates of the program are, or are not, achieving.
  3. Most assessment plans have an appropriate list of objectives that are measurable and focus on learning goals rather than teaching goals. However, many find methods used to measure these objectives are time-consuming and/or yield very little useful information. At this time, the most useful measures across the board appear to be senior exit surveys/interviews. These surveys/interviews are also the most commonly used assessment method.
  4. Mid-cycle reviews have also revealed that the comprehensiveness of assessment plans and efforts (although particularly useful and relevant when outcomes assessment was initiated at UNL) has led to very general information. This general evidence is not always useful for addressing the specific issues facing the department. The review has been used to encourage departments to identify learning issues that interest their faculty and then to determine how their assessment process can provide useful information about those specific issues.


It is too early to determine whether the suggestions provided in the mid-cycle reviews have helped departments improve their assessment plans, because the programs that have undergone a mid-cycle review have not yet conducted an APR since their review. However, annual reports from departments that participated in a mid-cycle review in 1998-99 indicated that some of the suggestions made in the review were adopted. If the review has assisted in that improvement, the program's Academic Program Review should be able to use assessment evidence to document program strengths while simultaneously gaining information about where and how to concentrate limited resources effectively.

Activity 2: Facilitate the annual reporting of college, departmental, and program assessment

In the fall of 1997, an annual reporting process was initiated to give the colleges an ongoing means of communicating progress on their assessment of learning outcomes. Deans were asked either to send copies of each department/program's annual report or to summarize the annual reports and retain file copies of the detailed reports for reference by the University-wide Assessment Coordinator in preparing for mid-cycle reviews or academic program reviews. The letter requesting the 1999 annual reports on 1998-99 assessment activities can be found in Appendix B. Because these reports are called for each fall, the college and its departments/programs are asked to report on the assessment activities they conducted in the previous academic year. Therefore, this summary represents the results of assessment activities completed by the colleges, departments, and programs during the 1998-99 academic year.

Programmatic improvement is the most important criterion for determining the progress of outcomes assessment at UNL because it is the primary reason that accrediting agencies (regional and professional) require institutions to assess learning outcomes. The colleges' annual reports indicate that assessment is informing programmatic improvement. Appendix C contains a detailed narrative highlighting the variety of ways outcomes assessment has informed programmatic improvement.

Overall, the colleges and their programs continue to progress in their outcomes assessment efforts. This progress is a critical accomplishment given that quality outcomes assessment is achieved through continual reflection on, and corresponding refinement of, assessment over time. Conclusions about the progress of assessment at UNL take into account the specific approach to assessment taken by each college. This is important because differing approaches may better represent the structure of a college's curriculum. For instance, because all majors in the College of Business Administration are required to take a common core curriculum and 50% of its students choose a general major founded on that core, the college has chosen to focus its undergraduate assessment initially on the common core. In contrast, the College of Arts and Sciences does not require a common curriculum core of its majors, and all of its departments were therefore asked to focus on their undergraduate majors. These two examples illustrate that although a review of the annual reports may seem to suggest a wide disparity in departmental activity, it makes sense for different colleges to use different strategies for approaching the assessment of student outcomes. It follows that a prescriptive criterion for assessing the progress of outcomes assessment in each college would not be valid. A better indicator is whether the activities in each college continue to improve or build upon its assessment each year. The following notes characterize this progress:

Note 1: There appears to be more follow-up on programmatic changes to determine if they had the desired impact.

Note 2: Assessment plans are being modified or redesigned because they have been implemented long enough to indicate that the evidence being collected is no longer insightful or is not answering questions of interest.

Note 3: There is a marked increase in the conversations and considerations of programmatic change spurred by assessment evidence.

Note 4: At this time, indirect measures (e.g. senior and alumni surveys) have been more influential on considerations of and actions for improvement than direct measures. Since indirect measures are easier to design, administer, and analyze, it is logical that they have provided more useful information. However, the information provided by indirect measures is limited in that it represents perceptions of learning rather than actual demonstrations of learning.

Note 5: Interdisciplinary programs, in particular, are benefiting from conversations about student learning based on their assessment of student learning outcomes. This result speaks positively about the contribution outcomes assessment can make to the improvement of programs whose faculty, because they come from different departments, normally do not occupy office space near one another. Assessment is thus creating a context for conversations that benefit these programs.

In conclusion, although the use of assessment results could be more pervasive, the fact that progress continues to be made yearly indicates that the university is headed in a direction that has the potential of leading to more meaningful and useful assessment work.

Activity 3: Initiate and monitor, where appropriate, the assessment of graduate student learning outcomes.

At this time, the initiation of graduate assessment has occurred in several areas:

  • The Graduate Studies office conducts an exit survey of masters and doctoral degree recipients. This survey asks outgoing graduate students about their perceptions of and satisfaction with the adequacy of their preparation across a variety of domains. The survey also collects other indicator data, such as average time to degree and placement after graduation. The Graduate Studies office formally offered to make these data available to colleges and departments for their assessment purposes.
  • The College of Arts and Sciences began to assess its graduate programs in the 1997-98 academic year. This college accounts for 26% of the graduate degrees awarded annually by the University of Nebraska - Lincoln; because of this, its initiation of graduate assessment is particularly important. Appendix D contains a narrative highlighting some of the results of the efforts undertaken in 1998-99. In addition, in July the Office of Graduate Studies asked all graduate programs to provide a brief synopsis of their assessment plans and activities to date. This synopsis might include a copy of an already developed assessment plan or a summary of a plan in development.

Activity 4: Monitor assessment of the Comprehensive Education Program (CEP)

Plans for assessing the Comprehensive Education Program (CEP) include indirectly measuring the outcomes of the program with institution-wide surveys and using conversations initiated in the Peer Review of Teaching and Assessment project to facilitate the direct measurement of student outcomes.

A. Institution-Wide Surveys

In 1999-2000, the Bureau of Sociological Research administered the freshman and senior surveys previously given in 1997-98. The surveys administered in 1997-98 provided a benchmark for future assessment of the program. Since the program was first applied to freshmen entering in fall 1995, seniors surveyed in 1997-98 had not fully participated in the program, and freshmen surveyed at that time had just begun to fulfill its curricular requirements. Although the freshmen surveyed this year are in a similar position to the freshmen surveyed in 1997-98, the seniors surveyed this year will have completed the requirements of the program. Therefore, how the program benefits students who have completed it can be determined by comparing this year's surveys with those conducted in 1997-98.

In addition, as requested by the colleges, a sufficient sample of freshmen and seniors was surveyed from each college to permit responses to be separated by college. This separation of results makes the data about Essential Studies more useful since the course requirements for this CEP component vary by college.

A full report on the results of these surveys appears in Appendix E and will be discussed with the Undergraduate Curriculum Committee, which is responsible for the CEP. In the future, surveys will be administered every two years so that the effect of the CEP can be determined over time.

B. Peer Review of Teaching and Assessment Project

The intention of the Peer Review of Teaching and Assessment project is to facilitate the direct measurement of the CEP. Modifications to the University's original plan for directly assessing the CEP were made in recognition of the need for greater faculty involvement in the assessment of the Comprehensive Education Program.6 This effort has been supported with funding from the William and Flora Hewlett Foundation, which has provided the departmental teams involved in the project a stipend for focusing on the teaching of CEP courses in their discipline and the goals related to them.

The original Peer Review of Teaching project involved faculty teams developing individual teaching portfolios based on exploration of and discussions about their individual courses. The pilot project in 1998-99 differed from the original project in that faculty teams from a department were asked to focus on the learning objectives for a set of courses, rather than a single course, in their discipline. In addition, these courses represented significant Integrative Studies (courses intended to develop critical thinking and communication skills) and Essential Studies (courses within eight knowledge domains) course offerings for the general education program at UNL. The 1999-2000 departmental teams were asked to determine, during the first year of the project, the integrated goals, methods, and student learning outcomes for a specific set of courses in their discipline. Although departmental teams selected the courses on which their discussions would focus, they were asked to include core courses in the department's major(s) that served the Integrative Studies and/or Essential Studies general education requirements for both majors and non-majors. The final product of the departmental discussions will be course portfolios that serve as the same foundation for the scholarship of teaching and learning that articles in academic journals serve for the scholarship of research. Creating such a foundation is important because what makes work scholarship is information that is accessible for review so that others can build upon that knowledge.4 The course portfolios therefore provide a streamlined version of this foundation without reinventing the academic journal.

During the second year, conversations about the definitions of Essential Studies objectives will occur with past participants across disciplines. This sharing of information outside the discipline will begin to form a foundation for establishing definitions and learning objectives for general education requirements in key service courses. Involving project participants in discussions about the objectives of Essential Studies enables faculty who have developed a dialogue about the curriculum within their own discipline to expand that dialogue to other related disciplines.

Several lessons have been learned from the faculty conversations facilitated by the project thus far. First, a connection between quality teaching and the assessment of student learning outcomes is logical and necessary. Second, sharing perspectives about teaching and student learning with colleagues both within and outside a discipline leads to rewarding conversations. The reward lies in providing faculty recognition and input on their own courses and a deeper understanding of how their courses contribute to or benefit from the learning occurring in their colleagues' courses. Third, developing a shared language for discussing teaching is important. Fourth, there is value in establishing a group of colleagues for discussing teaching and student learning both within the department and outside of it. These lessons are being expanded with the support of the Pew Charitable Trusts. Through a four-year grant, UNL is collaborating with the University of Michigan, Indiana University, Texas A&M University, and Kansas State University to create a system for sharing ideas and issues with colleagues in similar disciplines at other institutions.

Another set of departmental teams with significant Integrative Studies and/or Essential Studies course offerings will be recruited this summer to complement and continue the foundation set by the past and present teams. This year has consisted primarily of creating the appropriate context for accomplishing the project's goals.

Activity 5: Communicate useful assessment information to university faculty and administration.

A website, the University-Wide Assessment Steering Committee, the mid-cycle review, and the Teaching and Learning Center are four avenues used to communicate assessment issues and ideas to university faculty and administration. All four mechanisms have contributed, and can continue to contribute, to the progress of assessing outcomes at UNL in the different ways described below.

A. Website

The website (assess_reports.shtml) lists the University-wide annual assessment reports and CEP assessment reports. These reports provide an annual overview of the institution, college, and departmental efforts to assess student learning. This overview spurs discussions needed to determine the future direction of assessment efforts.

B. University-Wide Assessment Steering Committee

The University-Wide Assessment Steering Committee is the body that discusses issues and solutions for guiding future assessment efforts, and its members communicate information and changes about assessment to the colleges or divisions they represent.

C. Mid-Cycle Assessment Reviews

In the 1999-2000 academic year the mid-cycle review discussions and reports, as previously discussed, have been used to share resources and examples that will help departmental faculty determine which assessment methods will work best for them and how best to implement those methods. In addition, the review has been an excellent mechanism for better clarifying the purpose of engaging in outcomes assessment activities and the expectations of the NCA and regional accrediting agencies.

D. Teaching and Learning Center

The Teaching and Learning Center provided several avenues in the 1999-2000 academic year for sharing ideas and strategies for outcomes assessment efforts. Two articles about assessment appeared in the Teaching and Learning Center newsletter. One article, titled "Learning Outcomes Assessment at UNL" (October 1999), describes the purpose and value of engaging in the assessment of learning outcomes. Another, titled "Writing Learning-Centered Objectives" (November/December 1999), presents practical advice for writing course objectives that focus on student learning rather than the goals of the teacher. The Teaching and Learning Center also hosted two workshops related to assessment. A two-day workshop by Barbara Walvoord was sponsored jointly with the College of Agricultural Sciences and Natural Resources in January 2000. This workshop covered ideas and strategies for grading student learning in the classroom and for departmental assessment. The second workshop, titled "Learning-Oriented Classes: Connecting Goals and Objectives to Assignments and Assessment Tools," was given by TLC consultant Michael Anderson in April 2000.

Activity 6: Plan for the focus visit by North Central Association (NCA) on the progress of outcomes assessment at UNL.

The NCA focus visit on the progress of assessment of student learning outcomes at UNL occurred on October 25-27, 1999. UNL was formally notified that it has been reaccredited through 2006-07 by the NCA (See Appendix F). UNL will make a progress report to the commission in December 2002 on the continued implementation of the assessment of student learning outcomes.

Conclusions and Recommendations

The 1999 University-wide Annual Assessment report concludes that UNL's efforts to assess student learning outcomes have come a long way. Our progress on outcomes assessment continues to evolve as many individual units have begun to use assessment evidence to reflect on their educational programs. This progress is highlighted in the colleges' annual assessment reports. In addition, outcomes assessment evidence is beginning to make a greater contribution to program self-assessment at the time of Academic Program Reviews and/or Accreditation. To continue this progress we should:

  1. View outcomes assessment as a continuous process of reflecting on and improving our educational programs.
  2. Communicate the expectations and benefits of engaging in an effective assessment process.
  3. Identify strategies that reward and recognize meaningful outcomes assessment processes.
  4. Provide assistance to departments/programs struggling to plan and implement an assessment process that can be useful to them.
Attention to these four areas will continue to support assessment that is meaningful and useful, so that the benefits of assessment can then be communicated.

Based on the conclusions of this report the following recommendations will serve as a guide for the University-Wide Assessment Steering Committee's actions during the 2000-01 academic year:

Recommendation 1: Continue to monitor the integration of outcomes assessment evidence into UNL's program planning/budgeting process, which includes Academic Program Review and discipline-specific External Accreditation;

Recommendation 2: Consider the results of the faculty conversations centered on the CEP within the Peer Review of Teaching and Assessment Project;

Recommendation 3: Continue to facilitate the development and annual reporting of college, departmental, and program assessment activities;

Recommendation 4: Support efforts to plan and implement assessment of graduate student learning outcomes;

Recommendation 5: Communicate the benefits and share the successes of engaging in an effective assessment process.


Appendix A - Common Misperceptions about the Outcomes Assessment Process
Appendix B - Copy of Dean's letter requesting Annual Assessment Reports
Appendix C - Effects of the assessment of undergraduate outcomes
Appendix D - Effects of the assessment of graduate outcomes
Appendix E - Annual CEP Assessment report
Appendix F - Reaccreditation Letter from North Central Association