In constructing a table of measures for a program, one must attend thoughtfully to the program assessment activities outlined in 1.5.3 Defining a Program, 1.5.4 Writing Performance Criteria for a Program, and 1.5.5 Identifying Performance Measures for a Program. Tables of measures link performance criteria for a program, important attributes to be measured, measurement systems for acquiring data, and the identification of those who are responsible for producing specific program outcomes. The table is formatted so that a wide variety of program stakeholders can use it as a quick reference. This module describes the steps involved in constructing a table of measures, explaining each step using the example of an academic affairs program that is focused on student success.

Role for a Table of Measures

The table of measures is a template that summarizes multiple steps in the process described in 1.5.2 Methodology for Designing a Program Assessment System. It is intended as a quick reference both for those who are intimately involved in designing the program and for stakeholders whose actions are critical to program success but who may have been only indirectly involved in its design: for example, faculty, parents, students, advisory boards, and accrediting organizations (Burke, 2004). To assess, one must observe performance and rate its quality against specified criteria; one must also collect and analyze data and other evidence (Hollowell, Middaugh, & Sibolski, 2006). To provide high-quality feedback that can be used to improve future performance, an assessor should also measure and analyze a particular outcome (Walvoord, 2004). As such, the table of measures should capture the essential indicators of program quality, identify what needs to be measured, specify how and when measurements should be taken, and identify the persons responsible for assuring quality in each area (Middle States Commission on Higher Education, 2002, 2005).

Table Structure

A table of measures consists of six columns, labeled "criterion," "attribute," "weight," "means," "instrument," and "accountability." The criterion column lists the performance criteria. The attribute column describes what is going to be measured; in other words, the measurable characteristics that underlie each performance criterion. The weight column reflects the relative importance and rank assigned to each attribute. The means column identifies the vehicle or method that will be used to capture the performance data. The instrument column identifies the specific tool or gauge selected to measure the performance. The accountability column identifies the individual responsible for delivering a quality result for each attribute.
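The six-column structure can also be thought of as a simple record, one per attribute row. The following sketch (in Python, not part of the original module) shows one hypothetical way to represent such a row and to check that a completed table's weights total 100%; the class, field, and function names are illustrative assumptions rather than a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class MeasureRow:
    """One row of a table of measures: a single attribute under a criterion."""
    criterion: str       # performance criterion (quality)
    attribute: str       # measurable characteristic (measure/factor)
    weight: float        # relative importance, as a percentage
    means: str           # vehicle or method for collecting the data
    instrument: str      # specific tool or gauge that measures the performance
    accountability: str  # person or role responsible for a quality result

def weights_total(rows: list[MeasureRow]) -> float:
    """Sum the weight column; a finished table should total 100%."""
    return sum(row.weight for row in rows)

# Illustrative row drawn from Table 1
retention = MeasureRow(
    criterion="Oriented toward Student Success",
    attribute="Retention rate",
    weight=15,
    means="Institutional research report",
    instrument="Spreadsheet of students who do and don't return next year",
    accountability="Vice provost; OAA direct reports",
)
```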

Steps in Building a Table of Measures

To illustrate the process of building a table of measures, we will use as an example the Office of Academic Affairs (OAA) at a comprehensive, public, urban, land-grant university. This school has an open admissions policy, offers a variety of academic programs, and prepares students for certificates as well as associate, baccalaureate, graduate, and professional degrees. The university functions as a state system of higher education and is charged with identifying and meeting the needs of local residents, institutions, and communities. Before assembling the table of measures, the OAA wrote an essence statement to describe the core values of the program, its purpose, and what makes it unique. They consulted stakeholders across and outside of campus, defined the scope of the program, ranked the top ten goals for the program, and analyzed the top five processes and products of the program. Performance criteria were then crafted, and up to three measurable attributes were identified for each criterion. Table 1 shows the OAA's table of measures.

Step 1—Organize Performance Criteria and Supporting Attributes

Align performance criteria and attributes by entering each selected attribute in its own row under its criterion. Select only the areas of quality you genuinely want to measure: the list of desired attributes is often long and therefore impractical to measure in full, so it is essential to weight and prioritize them. In this example, the chosen criteria were "student-centered," "oriented toward student success," "aligns with the institution's vision and mission," "supports professional development," and "values the contributions of faculty and staff." Nine attributes are named to support these criteria.

Step 2—Weight Attributes

Assign a relative percentage weight to each attribute so that the percentages in this column add up to 100%. Consider dropping entries with low percentages, or combining them to produce a new item that is sufficiently important. Continue to choose and iterate, adjusting the weight of each attribute so that the table of measures accurately represents the priorities of the program; reweight the column and then re-sort it. Consider removing any attribute that receives a weight of less than 5%. This process usually produces 8 to 12 measures, each significant enough that an effort to improve it will noticeably elevate the quality of the program. In the example of the OAA, most of the attributes are weighted at the 10% level or higher.
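To make the reweighting arithmetic concrete, the sketch below (Python, with invented numbers) drops attributes weighted below a cutoff and rescales the remainder so the column again sums to 100%. The 5% cutoff follows the guidance above; the sample attribute names and weights are assumptions for demonstration only.

```python
def prune_and_renormalize(weights: dict[str, float], cutoff: float = 5.0) -> dict[str, float]:
    """Drop attributes weighted below the cutoff, then rescale so the weights sum to 100%."""
    kept = {name: w for name, w in weights.items() if w >= cutoff}
    total = sum(kept.values())
    return {name: round(100 * w / total, 1) for name, w in kept.items()}

# Invented example: after pruning the 3% item, the remaining weights are rescaled to total 100%.
draft = {"Retention rate": 18, "Graduation rate": 22, "Satisfaction": 12, "Newsletter mentions": 3}
print(prune_and_renormalize(draft))
# {'Retention rate': 34.6, 'Graduation rate': 42.3, 'Satisfaction': 23.1}
```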

Step 3—Determine the Means of Measurement

For each attribute, identify the most accurate and reliable means with which to collect the data you need in order to monitor progress or success. This step clarifies what must be set up to collect data, when collection will occur, and how it should be structured. It is an important part of the planning process because it is often impossible to reconstruct past performance data; the critical areas of performance can be measured only if we know ahead of time when, during an academic year, this information can be obtained. It is important not to confuse the means for collecting data with the instrument that measures the data once collected. The means is a vehicle or technique used to collect data about a performance; the instrument is a particular tool or gauge used to measure the performance reflected in the data collected. For example, two means for collecting data are portfolios and surveys. Evidence of a student's problem-solving skill development may be collected in a portfolio, and a rubric is a useful instrument for measuring the problem-solving performance it contains. Similarly, one might collect data about customer satisfaction using a survey and measure that satisfaction using a satisfaction index.

The following examples illustrate attributes, means, and instruments for different scenarios.

Attribute: level of knowledge attained
Means: standardized exams, College Board tests
Instrument: test score

Attribute: monetary per-capita expenditure
Means: budget
Instrument: discretionary expenditures/FTE

Attribute: student knowledge of tools for solving engineering problems
Means: Professional Engineering exam, survey of employers one year after graduation
Instrument: test score, satisfaction index

Step 4—Select a Key Instrument

Select a key instrument, tool, or gauge that is suitable for measuring performance in each attribute. For each means, determine whether an instrument exists to measure the specific attribute; if none exists, one must be built. Test the instruments to determine their accuracy, precision, reliability, appropriateness or feasibility, and comprehensiveness with respect to their associated attributes. In the example of the OAA, many existing data-collection instruments are used, but in several cases the data are post-processed to answer questions about a program attribute more directly.

Step 5—Designate Owners for Each Measured Attribute

In order for the program to improve from its baseline performance to its target performance, it is important for the program to have a champion for each important selected attribute. Assign the responsibility for each attribute to a campus leader. These champions should be distributed across the program, but each should have sufficient authority to remediate quality issues by redirecting budgets, manpower, and policies. If an attribute doesn't have a logical champion, it should probably be dropped from the table of measures. In the example of the Office of Academic Affairs, academic leaders in a diverse set of units are responsible for initiating data collection, analyzing findings, overseeing continued success, and implementing necessary changes to improve program quality.

Interpreting a Table of Measures

If we examine Table 1 and look at the criterion "oriented toward student success," we see that the OAA wants to give this area primary emphasis. That quality is parsed into the two most important attributes to be measured, namely the "retention rate" and the "graduation rate and program completion rate." The means for collecting the supporting data is an institutional research report. To measure the OAA's performance in increasing the retention rate, the instrument used is a chart of the number and percent of students returning for the following academic year, broken down by demographic background. The instrument used to measure the graduation rate and program completion rate is a chart of the number and percent of full-time and part-time students graduating or completing programs in two, three, four, five, or six years. Department chairs, OAA directors, and the Provost's office share responsibility for promoting and assessing student success on this campus; these responsibilities constitute a sizable portion of their job descriptions.
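To illustrate how the retention-rate instrument might be tabulated, the sketch below (Python) computes the number and percent of students returning for the following academic year, broken down by demographic group. The student records and field names are invented for demonstration and are not drawn from the OAA's actual institutional research report.

```python
from collections import defaultdict

def retention_by_group(students: list[dict]) -> dict[str, tuple[int, int, float]]:
    """Return (returned, cohort size, percent returning) for each demographic group."""
    returned = defaultdict(int)
    cohort = defaultdict(int)
    for s in students:
        cohort[s["group"]] += 1
        if s["returned_next_year"]:
            returned[s["group"]] += 1
    return {g: (returned[g], cohort[g], round(100 * returned[g] / cohort[g], 1))
            for g in cohort}

# Invented records for illustration only
students = [
    {"group": "First-generation", "returned_next_year": True},
    {"group": "First-generation", "returned_next_year": False},
    {"group": "Transfer", "returned_next_year": True},
]
print(retention_by_group(students))
# {'First-generation': (1, 2, 50.0), 'Transfer': (1, 1, 100.0)}
```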

Concluding Thoughts

Producing a table of measures is deceptively simple. However, if it is to have long-term value, its creators need to invest time uncovering what is distinctive about a program, how this is manifested in a small set of key attributes, when and under what conditions each of these attributes can be measured, and who should take responsibility for sustaining quality in each attribute. This can only occur by thoughtfully navigating each of the steps in the 1.5.2 Methodology for Designing a Program Assessment System. When we make the investment to faithfully follow the methodology and to organize the results in a table of measures, we can assure many program stakeholders that the things we choose to measure are the things that really matter most.

References

Burke, J. C. (Ed.). (2004). Achieving accountability in higher education: Balancing public, academic, and market demands. San Francisco: Jossey-Bass.

Hollowell, D., Middaugh, M. F., & Sibolski, E. (2006). Integrating higher education planning and assessment: A practical guide. Ann Arbor, MI: Society for College and University Planning.

Middle States Commission on Higher Education. (2002). Characteristics of excellence in higher education: Eligibility requirements and standards for accreditation. Philadelphia: Author.

Middle States Commission on Higher Education. (2005). Assessing student learning and institutional effectiveness: Understanding Middle States expectations. Philadelphia: Author.

Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments and general education. San Francisco: Jossey-Bass.


Table 1  Academic Affairs Program Focused on Student Success

Columns: Criterion (Quality), Attribute (Measure/Factor), Weight (%), Means (Vehicle), Instrument (Specific Tool), Accountability

Criterion: Student-Centered
Attribute: Documented student learning outcomes for each program
Weight: 10%
Means: Annual assessment reports
Instrument: Statistics on programs reporting and using outcomes
Accountability: Department chairs; assessment coordinator

Criterion: Student-Centered
Attribute: Students' satisfaction with climate and support services
Weight: 10%
Means: National Survey of Student Engagement
Instrument: Weighting of responses to key questions
Accountability: Directors in Academic Affairs

Criterion: Student-Centered
Attribute: Students' satisfaction with their college experience
Weight: 10%
Means: Student course evaluations
Instrument: Weighting of responses to key questions
Accountability: Department chairs

Criterion: Oriented toward Student Success
Attribute: Retention rate
Weight: 15%
Means: Institutional research report
Instrument: Spreadsheet with demographic and academic data on students who do and don't return for the next academic year
Accountability: Vice provost; OAA direct reports

Criterion: Oriented toward Student Success
Attribute: Graduation rate and program completion rate
Weight: 20%
Means: Institutional research report
Instrument: Spreadsheet with statistics on the number and percentage of students who obtain different degrees in 2, 3, 4, 5, and 6 years
Accountability: Department chairs

Criterion: Aligns with the Institution's Vision and Mission
Attribute: Importance of OAA in working the strategic plan
Weight: 10%
Means: Time allocated to student success in provost council meetings
Instrument: Web page with meeting topics and action items devoted to student success
Accountability: Provost

Criterion: Supports Professional Development
Attribute: Added value to faculty and staff as facilitators of learning
Weight: 10%
Means: Workshop feedback
Instrument: Workshop assessment forms
Accountability: Vice provost; assessment coordinator

Criterion: Values the Contributions of Faculty and Staff
Attribute: Visibility of OAA student success stories
Weight: 5%
Means: Faculty and staff annual activity reports
Instrument: Student success articles in annual newsletters and publications
Accountability: Vice provost; VP for advancement

Criterion: Values the Contributions of Faculty and Staff
Attribute: Role in annual performance appraisals
Weight: 10%
Means: Weighting of contribution to student success in faculty and staff salary determinations
Instrument: Scoring rubric
Accountability: Department chairs; direct reports; provost