This module provides an overview of the Methodology for Designing a Program Assessment System (PAS) and explains how to use it. Before beginning to design a PAS, one should examine all of the steps in the design methodology in order to understand the entire design process. In addition to presenting the complete methodology, this module identifies and briefly discusses the critical steps that faculty and administrators find particularly challenging. The modules that follow in this section further explain the stages of the methodology.

Designing a Quality Program Assessment System

The Methodology for Designing a Program Assessment System is given in Table 1. Although the steps are listed sequentially, in most cases it is necessary to revisit and update previous steps while working through the methodology. The steps can be grouped into five stages:

  • Specifying and defining the program (Steps 1-6)
  • Establishing program quality (Steps 7 and 8)
  • Performing annual program assessment (Steps 9-11)
  • Constructing a table of measures (Steps 12-15)
  • Documenting program quality (Steps 16-20)

Stage 1 Specifying and Defining the Program

As a program continues to evolve, it is important to step back and truly understand what the program is about. Stage 1 of the methodology focuses on defining and specifying the key components of the program, including its essence, goals, limitations, assets, and important processes.

A significant benefit of designing and implementing a program assessment system is that it gives the stakeholders of the program the opportunity to state the identity of the program clearly and publicly. Through this action the stakeholders both claim ownership of the program and prevent others from imposing an identity on it. This benefit is realized through the straightforward yet challenging act of stating the essence of the program (Step 1). The essence statement should be a one-sentence description of the program (as it presently exists), including the processes used and the products produced. Then, building on the essence statement, the program stakeholders are identified (Step 2) and the scope of the program is defined (Step 3). A key component of the program specification is the identification of the current and future goals of the program (Step 4). Once the goals are clearly understood, it is relatively straightforward to identify the top products or assets of the program (Step 5) and to define the processes to be used to accomplish the goals (Step 6).

Stage 2 Establishing Program Quality

The primary goal of a PAS is to enhance the quality of the program. To measure the quality of any program, it is important to state performance criteria for that program (Step 7). A strong criterion statement is clear and concise, supports one or more of the desired qualities of the program, and suggests at least one context for measurement. The objective is to identify the 3-8 areas of the program that account for most of its quality. The performance criteria serve as the basis of the program assessment system by providing the framework for identifying specific attributes to be measured.

Writing performance criteria is one of the most challenging aspects of the PAS design process. In particular, many individuals have trouble seeing the connection between a quality, the meaning (or analysis) of that quality, and how to express that meaning in the form of a written performance criterion. There is also a common tendency to begin determining performance standards rather than focusing on identifying areas of quality in the program. It is important to identify the key characteristics that determine quality for the program's products and processes. Using this list of characteristics, critical areas for measurement are identified and prioritized. The main areas of quality are then stated as performance criteria, along with measurable attributes for each criterion (Step 8). To facilitate writing quality performance criteria, a detailed methodology has been developed (1.5.4 Writing Performance Criteria for a Program).

Note that writing the performance criteria for a program parallels the process of writing the performance criteria for a course or a learning activity.

Stage 3 Performing Annual Program Assessment

It is important to shift from thinking about doing assessment (planning) to actually implementing an assessment system. This is not an all-or-nothing process, and it is not necessary to wait until the PAS design process is completed before initiating assessment. Once the performance criteria have been identified, a pragmatic approach to implementation is to design an annual assessment report around the performance criteria. Begin by assessing the program for the previous academic year (Step 9). The SII Method (4.1.9 SII Method for Assessment Reporting) provides a useful format for this self-assessment. At this point in the process, it is important to include all of the stakeholders in the program assessment process (Step 10). Complete the assessment by generating an annual assessment report (Step 11).

Once the performance criteria have been applied to the performance of the program over the previous year, it becomes much clearer how to proceed with the next step (designing measures). Further, the annual assessment report will serve as a model for the reports generated in future years.

Stage 4 Constructing a Table of Measures

The heart of the mechanism for measuring quality is the "Table of Measures" (Table 2); it serves as a template for completing the PAS design process. It focuses on what really matters in the program: the measurable characteristics (or attributes) that align with the performance criteria (from Steps 7, 8, and 12). An essential part of building the table of measures is prioritizing and weighting the attributes to identify the most important ones and eliminate those that are non-essential (Step 13).

For each attribute, identify a means for collecting data and determine whether an instrument exists to measure performance (Steps 14 and 15). Examples of instruments include rubrics, alumni surveys, grants, publications, retention and graduation data, placement data, satisfaction surveys, and portfolios. If no instrument exists for a given attribute, one must be built.
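
The columns of Table 2 map naturally onto a simple record structure, one row per attribute. The sketch below is a purely illustrative representation of that structure, not part of the methodology; the criteria, attributes, weights, instruments, and values shown are hypothetical examples.

```python
# Purely illustrative sketch: hypothetical rows of a Table of Measures (Table 2).
# The criteria, attributes, weights, instruments, and values are invented examples.

table_of_measures = [
    {
        "criterion": "Graduates are well prepared for professional practice",
        "attribute": "First-year placement rate",   # measurable characteristic (Step 8)
        "weight": 0.4,                               # relative priority (Step 13)
        "means": "Annual alumni survey",             # means of data collection (Step 14)
        "instrument": "Placement data",              # measuring instrument (Step 15)
        "benchmark": 0.78,                           # current performance (Step 16)
        "target": 0.85,                              # future target (Step 16)
        "accountability": "Department chair",        # responsible individual (Step 17)
    },
    {
        "criterion": "The program sustains an active scholarly culture",
        "attribute": "Peer-reviewed publications per faculty member",
        "weight": 0.6,
        "means": "Annual faculty activity reports",
        "instrument": "Publication counts",
        "benchmark": 1.2,
        "target": 1.5,
        "accountability": "Assessment coordinator",
    },
]

# Listing attributes in order of weight shows at a glance which measures matter most (Step 13).
for row in sorted(table_of_measures, key=lambda r: r["weight"], reverse=True):
    print(f'{row["weight"]:.1f}  {row["attribute"]} - measured via {row["instrument"]}')
```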

Stage 5 Documenting Program Quality

The final stage of the methodology focuses on documenting program quality by tracking the performance of each attribute. For each attribute, it is helpful to make comparisons with benchmarks of current performance and with targets established for future performance (Step 16). To share and distribute the responsibility for meeting the targeted performance levels, it is also important to assign accountability for each attribute to a specific program member (Step 17) and to establish an index for measuring overall success (Step 18).
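As a purely hypothetical illustration of such an index, if an attribute weighted at 0.4 reaches 90 percent of its target and an attribute weighted at 0.6 reaches 80 percent of its target, a simple weighted index of overall success would be 0.4(0.90) + 0.6(0.80) = 0.84, or 84 percent of targeted performance.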

Before the program assessment system is fully implemented, all participants and stakeholders involved in the program should be given the opportunity to provide assessment feedback that includes strengths, areas for improvement, and insights (Step 19). In addition to improving the quality of the program assessment system, this feedback helps build the commitment and trust that are essential for successful implementation of the system.

Assessment is a vital component of any methodology; it provides the feedback mechanism that allows a program to build on its strengths and take action to make improvements. It is important not to overlook the need to assess the program assessment system itself. Thus, a complete assessment system involves using various forms of assessment (formative, summative, and real-time) on all aspects of the program and on the program assessment system itself (Step 20).

Concluding Thoughts

The benefits a program derives from a well-designed and successfully implemented program assessment system significantly outweigh the time and energy invested in designing the system. Nevertheless, the design process can be an intimidating impediment to establishing an assessment-based program. The Methodology for Designing a Program Assessment System provides a clear progression of steps to assist even a novice in this endeavor. The result will be an efficient program assessment system focused on the key attributes that determine quality performance.

Table 1  Methodology for Designing a Program Assessment System

Specifying and Defining the Program

Step 1: Write a one-sentence description which captures the essence of the current program.

Step 2: Identify all program stakeholders and their interests.

Step 3: Define the appropriate scope (boundaries) of the program: what it is, and what it is not.

Step 4: Identify the top five current goals and five future goals for the program; use a three- to five-year time frame.

Step 5: Identify the top five products or assets of the current and future program.

Step 6: Describe the key processes, structures, and systems associated with the program that will help accomplish the current and future goals from Step 4.

Establishing Program Quality

Step 7: Write clear performance criteria that account for most of the quality of the program.

Methodology for Writing Performance Criteria:

  1. Brainstorm a list of characteristics/qualities (and values) which determine program quality.
  2. Check with other programs/stakeholders to determine whether any key characteristics are missing.
  3. Rank the top ten qualities for the future design of the program.
  4. Select the critical areas for measuring; prioritize to just a few (7-10), consolidating highly related qualities.
  5. For each quality, identify a set of three to five important aspects.
  6. Write statements illustrating the performance expectations that produce these qualities by describing the important aspects of the performance.

Step 8: Identify up to three attributes (measurable characteristics) for each criterion.

Annual Program Assessment

Step 9: Self-assess the program for the previous academic year.

Step 10: Obtain feedback from all stakeholders (strengths, areas for improvement, and insights) about the performance of the program.

Step 11: Produce an annual assessment report.

Constructing a Table of Measures

Step 12: Create the structure for a table of measures (Table 2). Fill in the first two columns (criteria and attributes) with information from Step 7 and Step 8.

Step 13: Prioritize the attributes by appropriately weighting each attribute.

Step 14: Identify a means for collecting data on each attribute.

Step 15: Identify a key instrument associated with each chosen attribute to measure the performance reflected in the data collected.

Documenting Program Quality

Step 16: Determine current benchmarks and future targets for each attribute to document annual performance.

Step 17: Assign accountability (to an individual) for each attribute to assure that performance targets are met.

Step 18: Create an index for measuring overall success.

Step 19: Obtain stakeholder buy-in for the program assessment system by asking the stakeholders to assess the system.

Step 20: Annually assess the program assessment system.

Table 2  Table of Measures (template column headings)

Criterion | Attribute | Weight | Means | Instrument | Benchmark | Target | Accountability