1997 Annual Report
University of Nebraska–Lincoln
Strengthening the Role of Student Outcomes Assessment in
Academic Program Review and Accreditation
Revised May 1998
Outcomes Assessment and Academic Program Review
In the revised UNL assessment plan submitted to the NCA in 1996, it was stated that "every academic degree program and the curriculum on which it is based will now routinely and systematically engage in the assessment of student learning" (p. 5) and that assessment findings would be incorporated into the Academic Program Review process. Basic requirements for college/program assessment plans and annual reports were specified. Also, APR guidelines were revised in order to explicitly include assessment plans and results. The following excerpts from the UNL plan describe these steps in more detail:
"The revised APR guidelines (1995) specifically require the department's self-study document to include goals and plans for assessing student academic achievement within the major(s), graduate program(s), and Comprehensive Education Program, as well as statements of desired student outcomes. In addition, departments have responsibility for producing evidence of student learning in relationship to these desired outcomes:
Describe the department's plan for assessing student learning in the major, the graduate program, and in the Comprehensive Education Program courses delivered by the department. The department's student outcomes assessment plan should include at least two measures/indicators of learning for each program. [Examples might include standardized tests, locally developed tests, surveys of employers of graduates, capstone courses, portfolios, performance or exhibition appraisals, etc.]
What evidence is there that students in the program(s) have learned the material expected or identified in the program objectives?
These revised guidelines have been provided to academic units for use during the 1996-97 academic year. Program and unit administrators have prepared, as part of the Academic Program Review process, an assessment plan for their respective undergraduate and graduate programs. These program plans identify methodologies to be employed, issues to be addressed, frequency of administration, scope of administration (e.g., whether a procedure will be used for all graduates or just for a sample), timeline for implementation, procedure for using results and potential impact on curriculum, programs or structure, provisions for feedback to students and for explaining to them the purposes of the assessment procedures and other relevant matters...
...Program directors and chairs are to report annually on assessment activities, actions taken in response to assessment findings, and eventually, the results of these actions with respect to improving student academic achievement." (pp. 13-15)
"Because specialized accreditation is not a guarantee that assessment is in place, units undergoing specialized accreditation are required to provide assessment information consistent with the conditions outlined in the academic program review guidelines for UNL." (p. 16) [emphasis added]
"During the program review process a conscious effort is made to ascertain whether or not faculty in the particular programs under review have ownership of the assessment of student learning." (p. 22)
"As the university matures in its development and incorporation of assessment practices, it is anticipated that both the areas and methods identified in each program plan will be modified, refined and adjusted to improve reliability and validity. At this stage of development the goal has been to create measures that are reflective of program content and objectives (cognitive and non-cognitive) and that rely on methods of assessment that are practical and informative." (p. 23)
The office of the Vice Chancellor for Academic Affairs is responsible for ensuring that guidelines relating to assessment information have been followed by each unit in its academic program review. Because the effectiveness of UNL's assessment plan hinges upon assessment being successfully integrated into the APR and accreditation processes, a mechanism is needed to encourage and monitor this integration. The following plan assumes that units have implemented assessment plans with the characteristics described in the university plan and that they have documented their assessment activities in annual reports to their deans.
Programs Undergoing Academic Program Review
Mid-cycle: The core of the plan consists of a focused review of the program's assessment activities to be conducted by the University-wide Assessment Coordinator. This review would occur at approximately the midpoint between APRs, which typically occur every 5-6 years. Administrative details are yet to be finalized and may vary to some extent across programs, but it is expected that the Assessment Coordinator will review the program's most recent self-study and subsequent assessment reports, meet with the faculty committee overseeing assessment activities in the program, and produce a written analysis of the program's assessment plan and implementation. The criteria that will serve as a framework for the review are given below. It is intended that the review be developmental in focus and include suggestions for strengthening the assessment plan. The Assessment Coordinator will also provide follow-up support (e.g., technical assistance in developing or refining tools, training for faculty in refining objectives or developing scoring schemes) as desired by the program in preparation for its next APR.
During APR: The Assessment Coordinator will review the self-study for evidence that student outcomes assessment is an integral part of the unit's process of program improvement. It is expected that self-studies will include information about both the assessment process and use of findings in decisionmaking. No formal report will be made to the program at this time, but a summary of how information from the assessment process was used in the self-study will be included in that year's University Assessment Report.
Programs Subject to External Accreditation
A review of assessment activities in these units will occur at the midpoint in their accreditation cycle, with follow-up support provided as desired to help them prepare for their accreditation review. If an entire college is accredited as a unit, it may choose to coordinate the assessment reviews of all its programs at the midpoint of the accreditation cycle. This may be especially appropriate for colleges that conduct many assessment activities on the college level or that consist of programs having overlapping goals and/or measures. Criteria would be the same as those used in the APR process. At the time of the accreditation review, a summary of how information from the assessment process was used in the self-study will be included in that year's University Assessment Report.
Anticipated Benefits of Changes
Depending upon accreditation standards and the training and interests of external review teams, a team's report may make no mention of the faculty's assessment efforts, which can convey the impression that such efforts are not valued. This plan ensures that, at some point in the program review process, assessment activities receive focused attention. In addition, employing explicit criteria makes it clear that the university has standards that should serve as goals for programs as they develop and refine their assessment plans. Despite this degree of standardization, however, faculty retain great latitude in determining the objectives to be measured and how to measure their achievement.
With responsibility for assessment activities changing frequently, annual reports alone are unlikely to convey the broader picture of how or whether assessment is contributing to the growth of a program. Instituting a formal assessment review is intended to encourage reflection upon the cumulative effects of the assessment process. The mid-cycle review emphasizes the university's commitment to a process of outcomes assessment that provides the information necessary for formative program evaluation.
Criteria for Assessment Review
The following criteria will be used in reviewing each unit's assessment activities. They extend current guidelines and have been developed to be consistent with standards currently in use by accrediting bodies and with recommendations of professionals in the field of outcomes assessment. Following the criteria are three appendices. Appendix A presents a tentative schedule of mid-cycle reviews, based upon what is currently known about APR and accreditation schedules. Appendix B summarizes the extent to which the criteria reflect current written standards of UNL colleges and various accrediting agencies. Appendix C provides excerpts from these standards that are relevant to each criterion.
I. Areas of Assessment
The plan includes assessment of student learning outcomes in
- the undergraduate major
- graduate programs
- the Comprehensive Education Program: a) the program's unique contribution to the CEP for non-majors and b) progress of the program's own majors toward CEP objectives. These two aspects of CEP will not be equally relevant for all units, and assessment plans should reflect an emphasis appropriate to the discipline.
It is not necessary for a unit to include all these areas in its initial plan, nor is it necessary to assess all areas every year. However, in reviewing a unit's plans and activities over a period of years, it should be evident that the assessment process encompasses all three areas.
II. Program Objectives
The plan should include written goals linked to the department mission and reflecting all important learning outcomes (these may be affective as well as cognitive). It is desirable that these goals be general enough to reflect the importance of the outcome yet sufficiently specific to be measurable. This may be accomplished by writing goals in the form of broad outcomes, but then developing specific, measurable learning objectives for each goal.
III. Measures/Data Collection
Assessment activities should:
- include more than one type of instrument (multiple measures)
- include at least one direct measure of student achievement
- produce information that is sufficiently reliable and valid for its intended use (i.e., attention should be paid to the technical quality of instruments and/or data-collection procedures). The appropriate degree of rigor in instrument development is determined by both the priority given to the objectives being measured and whether other measures provide information related to the same objectives. An instrument that is the primary or sole source of information should have more resources devoted to its development. The explanatory note at the end of this document provides details of the characteristics of exemplary assessment plans with respect to this criterion.
- when considered as a long-term plan extending over several years, be comprehensive enough to provide information relevant to all program goals
- be cost effective (i.e., information gained justifies faculty and student time expended)
IV. Administrative Structure
There should be evidence that:
- there is widespread faculty involvement in the assessment process
- a person or committee is formally responsible for overseeing the assessment process
- a formal mechanism exists for disseminating assessment results to faculty
V. Analysis and Reflection
There should be evidence that:
- assessment information is used for decisionmaking, specifically to:
- improve the program
- address questions of interest to the faculty
- address questions raised in the last APR
- there is follow-up evaluation of the effectiveness of changes made on the basis of assessment information
- the assessment process itself is routinely reviewed and refined
See Appendix A for the Tentative Schedule for Focused Assessment Evaluations (4/98).
Explanatory Note: Data Quality
With respect to the quality of data, exemplary assessment plans:
a. are consistent with sound psychometric practice in developing in-house tests. For example:
The coverage of the test in terms of depth, breadth, and emphasis accurately reflects the objectives it is intended to measure.
Care has been taken to ensure that performance on the test does not depend on factors that are irrelevant to the objectives being measured. (An example is using a format that is unfamiliar to students.)
The items are appropriate in difficulty given the time when testing will occur. (A test measuring mastery of recently taught material may look quite different from one that measures retention of the same material at the end of the program.)
The test development process includes pilot testing, analysis, and revision of items. (For a small program, this may be a process that extends over several years.)
The test items are likely to be correctly answered by a student who has mastered the subject matter and incorrectly by a student who has not.
b. investigate both the technical merits and curricular match of commercially produced tests, if these are used. Relevant questions to ask include:
Does the content coverage match the objectives to be assessed and measure them in a manner consistent with this curriculum? To determine this, it is important that faculty review the actual test items as well as the test's table of specifications.
If subscores are reported for different content areas, are they based on enough items to be accurate indicators of performance in those areas?
Was the group of examinees used to develop the test large enough for reported statistics to be stable?
Were the examinees used in developing the instrument (and establishing norms, if these are of interest) similar to this program's students? (For example, a test used for graduate school admissions will probably be developed using only students who apply to graduate programs, while you may be interested in the performance of a much broader student population.)
Will modifications be made to the test in its administration? (For example, will items be deleted, time limits changed, or aids such as calculators or reference materials allowed that are not part of a standard administration?) Such changes make it inappropriate to use the published norms in interpreting the scores and may affect what is being measured by the test.
c. standardize questions such as those used for exit interviews or for evaluating student projects or portfolios
d. for non-objectively scored achievement measures, use multiple raters for at least a sample of the student products and check the agreement of scores across raters
e. motivate students to give their best performance (integrating assessment activities into the student's program in such a way as to provide incentives)
f. ensure that when sampling is used, the group of students assessed is representative of the entire program