Implemented Actions since the HLC 2002 Visit
In view of the suggestions given by the visiting team concerning
criterion three of the 2002 self-study, the assessment committee
reconvened with a renewed desire to create a comprehensive,
faculty-driven, institution-wide plan that focused on using
authentic means of assessing student achievement. The committee
decided to work on two areas of immediate concern:
- a perceived lack of clearly defined general education objectives agreed upon by faculty, and
- specific goals for transfer and career programs which incorporated course outcomes that were aligned to the mission of the College.
General Education Competencies (2002-2003)
To achieve the first goal, faculty were approached during a May 2002 faculty forum and asked to discuss those elements that should be part of any program deemed “general education.” Faculty were charged with the question: What should graduates be able to do? From the generalized list created that day, faculty were polled to determine the most common competencies that were both taught and expected by instructors in classes across the College.
In August 2002, the fall in-service focused on assessment, and
faculty reviewed the existing general education competencies and refined the newly proposed competencies. During the 2003 spring semester, faculty approved the
six major general education competencies: communications,
problem-solving, quantitative skills, research, ethics, and
technology. Cross-discipline teams were formed and met during
faculty forums to discuss the new competencies and identify
specific objectives for each. In March 2003, faculty completed General Education Alignment Forms, identifying from the list of six which competencies they taught and assessed in their classrooms. These data were aggregated to determine coverage and breadth of the described competencies and objectives.
From these six competencies, and from the prior general
education statement in the catalog description of our distribution
requirements, the Core Team created the general education
philosophy statement.
Transfer and Career Program Goals (2003-04)
Once the faculty had established and refined the general
education philosophy and competencies, the next step was to develop
a practical, simple, and effective way to collect consistent data
about general education and transfer and career goals at all
program levels. In the fall of 2003, the Assessment Committee was
charged by the Vice President of Instructional Services with
developing a systematic method of collecting program data. The
first step was to conceptualize the data streams from classroom to
degree.
Illustration 12: 2003 Institutional Goals and Data Flow Chart
Illustration 12 is an early graphic representation that guided
subsequent design steps. Although details of the system evolved
slightly away from this model, the four existing assessment system goals are represented: Transfer, Career, General Education, and Developmental.
The decision was made at that time to continue administering the
CAAP exam for longitudinal purposes. For internal program data, it
was determined that the committee would proceed from an established model, thus increasing the likelihood that resource material and previous examples of successful implementation would be available. Of particular concern to the committee was that the
model be flexible enough to allow faculty to determine what
assessment tools and events would be best suited to their
particular content-area objectives, without compromising the
ability of the collected data to be aggregated for inclusion in the
discipline, area, and program levels.
The committee eventually agreed that the basic model given in
James Nichols’ A Practitioner’s
Handbook for Institutional Effectiveness and Student Outcomes
Assessment Implementation⁴ would provide the groundwork
for further development of the plan. The model fulfilled two
objectives of the committee: it provided data for multiple levels
of instruction and program analysis and created an
expectation for using the results of assessment events to make
measurable change in instruction and practice. The committee felt
strongly that most faculty were already using assessment
instruments to measure student learning outcomes, although many faculty had likely not considered their objectives and measures as part of the larger plan of institutional assessment. The committee
believed that faculty ownership, and subsequently faculty buy-in to
the assessment process, would be greatly advanced if faculty were
allowed to continue to use instruments they were comfortable with,
or develop instruments which fulfilled both the needs of the data
collection process as well as the needs of the instructor in daily
classroom management. The Nichols model allowed for variation
without sacrificing cross-discipline and cross-college
investigation.
In October 2003, it was determined that the bulk of the remaining design work focused on academic issues, and the assessment committee created a sub-committee composed solely of faculty representatives. This Core Team was charged with designing and
maintaining the assessment system.
In January 2004, the Assessment Committee introduced the first
version of an assessment
reporting template that allowed faculty to consider their
courses, disciplines, and areas from two major perspectives:
1. The template required identification of either a transfer
goal or career goal, depending upon the focus of the discipline.
From that overarching goal, faculty members were asked to identify
those specific objectives used to ensure students met those goals.
For each objective, faculty were then asked to identify what
assessment tool or event would determine if students had met the
stated objectives. Finally, faculty members were asked to identify
the criteria that constituted success based on their instrument of
choice.
2. The template required that faculty identify what general
education competencies were being assessed in their classes. For
each competency identified, faculty members were asked to identify
an instrument they used to measure student success in performing
the described competency, and to elaborate on what constituted the
criteria for success based on their chosen instrument.
Each discipline and area plan incorporates
objectives that have been agreed upon by faculty in that discipline
and area. The objectives reflect the most common thread of the
individual instructors' objectives, bridging the gap between instructors teaching and assessing in isolation and faculty collaborating to determine how best to approach instruction at all levels. Aggregation of data relating to the
criteria for success or lack thereof from the individual plans
establishes whether or not disciplines and areas are meeting their
common objectives.
This model served two purposes which the committee agreed were equally important: on the one hand, data collected can be in any initial form, which allows significant latitude for qualitative assessment, a process which many felt is much more invested and authentic than measures which are always statistically
quantifiable. On the other hand, translating the initial data into
more easily quantifiable “measures of success” based on common
criteria allows the institution to follow the thread of student
academic achievement from the individual classroom all the way to
the mission of the institution in a manageable way that can be
summarized for presentation. The plan, essentially, allows for reflection and improvement at both the course level and the institutional level, without compromising the integrity or
effectiveness of either.
During that spring, deans began meeting individually with
instructors and discipline groups to facilitate the development of
the course objectives. The Core Team, meanwhile, began designing
the first data collection forms, with the goal of creating a
digital assessment system to avoid a cumbersome paper-based system.
By the end of the 2004 spring semester, plans were underway to
begin the data collection cycle. That March, the president approved
funding for release time for the Core Team to manage the increasing
workload.
Summer of 2004: Connecting Processes and Bringing Faculty Together
The summer of 2004 was the most expansive growth period for the assessment system: the College's reorganization into a learning college created procedures and policies which formalized the proposed assessment system. Membership and duties for the various
committees and subcommittees were articulated and documented. The
Core Team, newly expanded to six members, introduced the new
intranet server (known locally as the Assessment Folder) to house
the assessment report forms and revised templates and data
collection forms. The
Core Team also proposed adding to the developing committee structure a General Education Subcommittee, whose main purpose
was to collect and report results from the ongoing CAAP testing
process, as well as to collect aggregate general education data
from classroom report forms. Each faculty member received a copy of
the Academic Assessment Handbook for Faculty: An Introduction
to the System (see Appendix O). An assessment resource room was
established and a collection of resource materials made available
for faculty use.
In June, consultant Regis McCord gave a faculty workshop on
using rubrics in assessment events. In July, administrators and
faculty members who attended the Nichols Institutional
Effectiveness Institute brought back an increased awareness of how
the academic assessment stream feeds into the strategic planning
process. The Core Team noted that adaptations made to the Nichols
model were in the best interests of the Sauk community and
increased the institution's ownership of the process. At this time,
it was determined that the success of the assessment program relied
on having representatives from each area who would be able to
organize and communicate with faculty about the data collection
process and subsequent discussion and implementation of
instructional improvement. The nine Area Facilitator positions were
created, and the facilitators were given training on the system to
allow them to better help their faculty through the first data
collection cycle.

4 Nichols, James, A Practitioner’s Handbook for Institutional Effectiveness and Student Outcomes Assessment Implementation, 3rd ed. (New York: Agathon Press, 1995).