Assessing Student Learning Outcomes

The commitment to excellence in instruction at each of the schools and campuses of the University of Pittsburgh requires a comparable commitment to a culture of assessment, through which we continually evaluate the success of our academic programs and feed the results of those assessments back into our academic planning processes.

Student Learning Outcome Goals

Each school’s and campus’s goals for student learning outcomes should be consistent with the University’s goals for all our graduates—that our students be able to:

  • Think critically and analytically
  • Gather and evaluate information effectively and appropriately
  • Understand and be able to apply basic, scientific, and quantitative reasoning
  • Communicate clearly and effectively
  • Use information technology appropriate to their discipline
  • Exhibit mastery of their discipline
  • Understand and appreciate diverse cultures (both locally and internationally)
  • Work effectively with others
  • Have a sense of self, responsibility to others, and connectedness to the University

Responsibility for Assessment

The following people and units are responsible for specific parts of the assessment process:

  • Program faculty are responsible for the development and administration of the assessment processes of individual programs in accordance with the appropriate programmatic or departmental governance structure.
  • Department chairs are responsible for coordinating the assessment process for departmentally based programs; deans, directors, and campus presidents are responsible for coordinating the assessment process for school- and campus-based programs.
  • Schools and regional campuses are responsible for developing internal procedures for documenting program assessment.
  • Deans, directors, and campus presidents are to report annually to the Provost on the school’s and campus’s assessment activities and relevant results as part of their planning process.

Guidelines for Documenting the Student Learning Outcomes Assessment Plan in the Matrix

Schools and campuses should ensure that there is a process in place to assess all academic programs. For each program and for their general education curriculum, schools and regional campuses should document the following components of the assessment process in the Student Learning Outcomes Assessment Matrix.

Note: Programs may request permission to substitute a professional accreditation process as the assessment protocol by showing how that professional accreditation process maps onto the institutional framework for assessment. 

1. Program or School Mission Statement and Goals 

The program mission statement articulates broad goals describing what the program aims to provide its students. The mission is centered on outcomes that are of value to the department or program, and the goals it articulates should be consistent with the goals of the school, campus, and University. Use the program mission statement as the framework for assessing the other components of the program.

2. Learning Outcomes

Learning outcomes answer the question, “What will students know and be able to do when they graduate?” and should be clear, concise, and specific to the discipline. Consider using action words such as “identify,” “analyze,” and “explain” when defining learning outcomes to clarify outcome results during the evaluation process. There are typically three to five learning outcomes for a major and two to three for a certificate or micro-credential.

Here are a few examples of student learning outcomes:

  • Students will demonstrate a solid working knowledge of basic principles and vocabulary of micro- and macroeconomics.
  • Students will be able to effectively communicate mathematical research knowledge and teach mathematics at a university level.
  • Students will be able to demonstrate mastery of clinical performance skills to provide safe, competent care.

3. Assessment Methods

In the Assessment Methods section of the Student Learning Outcomes Assessment Matrix, you are asked to answer the following five questions. 

1. What data source will be used to measure this learning outcome? 

2. How will the data source be assessed?

Assessment methods should be strategically selected to best assess the specific learning outcomes. Successful assessment plans use both direct and indirect methods of assessment. It is important not only to measure students’ perceptions of how much they have learned in each program, but also to identify measurements that will show whether students are attaining the goals the program has set for its students: “How will the outcome be measured? Who will be assessed, when, and how often?” Use (or adapt) assessment opportunities already in place, such as tests, projects, surveys, and capstones, instead of creating an entirely new system of measurement.

Direct Assessment yields direct evidence of student learning, such as course papers and assignments, performances, exhibits, licensure and professional exams, and standardized tests such as the GRE subject tests.

Indirect Assessment yields indirect evidence of student learning, or of perceptions of student learning, such as student surveys, focus groups, exit interviews, job and graduate school placement, and graduation and retention rates.

Most PhD programs use qualitative assessments of PhD milestones, student publications and presentations, and students’ career outcomes to assess student learning outcomes. Milestones tend to be assessed with program-designed rubrics that reflect the disciplinary nature of the learning goals. The doctoral program committee, or a subset of program faculty, usually conducts the assessment of each learning outcome once every three years by assessing and discussing the work (e.g., dissertations) of a subset of students.

3. What group/subgroup of students will be assessed?

Once an appropriate method of assessment is identified, the faculty should decide who will be assessed, i.e., what sample of students. In some cases, particularly when reviewing student papers, theses, or dissertations, it makes sense to use only a small sample of students, such as 10%. In other cases, for example when placement records of graduate students are used, 100% of the students might be assessed.

4. Who will do the assessment?

Many programs choose to use a team of three faculty members who review a sample of student products to assess how well students are meeting the standard set by the faculty for specific learning outcomes. This approach need not be time consuming as only a sample of capstone papers, theses, and dissertations would need to be reviewed every three to five years.

5. When will the assessment be conducted (Year 1, 2, or 3 of assessment cycle)?

Student learning outcomes need only be assessed every three to five years. To keep assessment manageable, programs should determine an assessment timetable that results in meaningful data without causing undue burden to the faculty. A schedule that takes the entire plan into consideration can and should result in only a small time commitment each year on the part of the program’s faculty or staff.

4. Standards of Comparison

Standards are values set by individual programs that represent the expectation for a given measurable goal. Standards of comparison are determined by the program faculty to provide a benchmark for student achievement in a specific program. Standards link directly back to the specific learning outcome, not to a cumulative measure of student achievement such as course grades or GPAs. Faculty consider questions such as: What does it mean for a student to demonstrate effective communication or critical thinking skills in the discipline? What is the standard for determining that this goal was achieved? Specifically, “How well should students be able to do on the assessment?”

Here are a few examples of Standards of Comparison: 

  • 100% of evaluated drug log assignments score at or above 80%
     
  • 90% of papers demonstrate that the topic was researched, developed, and presented with a high degree of relevance in terms of policy implications and/or practice applications.
     
  • Within one year of graduation, more than 90% of alumni are employed in a teaching or teaching-related position or enrolled in graduate school; after three years, more than 85% of alumni indicate satisfaction with various elements of their total program and of required courses.

5. Interpretation of Results

Results are analyzed using the criteria and standards set forth by the program faculty and answer the question, “What does the data show?”

Here are a few examples of what the data shows:

  • 75% of the class scored 80% or higher on the relevant portions of the exam. The mean was 84.6%. The median was 86%. 
     
  • Of the 32 students who graduated with a PhD from our department between 2018 and 2021, 16 (50%) have attained full-time, tenure-track positions. This figure surpasses that reported (49%) in the most recent study (2021-2022) of placement to tenure-track conducted by the [professional association].
     
  • EVALUATED SPRING 2023: NEXT DUE TO BE EVALUATED SPRING 2026. Eight exams—the totality of those presented since written comps essays were instituted as part of our graduate program reform several years ago—were evaluated. 100% were assessed as demonstrating at least Proficient knowledge; only one was judged to be Exceptional.

6. Use of Results/Action Plan

The resulting data from the measurement of student learning outcomes is of little use unless the academic program has a strategic plan for using those results for program improvement. Individuals and/or committees responsible for using the data to implement strategies for program improvement are identified at the time the assessment plan is drafted, and they answer the questions, “Who reviewed the findings? What changes were made after reviewing the results?” The action plan should address shortcomings, increase expectations, and refine methods, with specific actions and a timetable.

Here are a few examples of use of results/action plan: 

  • The data collected and analyzed by the doctoral committee suggests that Ph.D. students need more opportunities to publish and present research at professional conferences. In response, the doctoral committee plans to take the following actions: 
    • Convene a faculty meeting in 20XX PY to discuss ways to encourage and mentor doctoral student authorship and presentations of research at conferences. 
    • Develop and distribute guidelines for doctoral advisors that include departmental expectations for doctoral student publications and presentations. 
    • Create more department-level research and writing groups that include doctoral students. 
    • Improve our distribution of information related to professional conferences, including posting upcoming CFPs on the departmental “doctoral student information” bulletin board. 
    • Request that the school faculty annual review process include a section on our notation for co-authoring with doctoral students. 
    • Make expectations for publishing and presenting clear at the annual doctoral orientation. 
       
  • Although we have consistently attained positive findings regarding achievement on this goal, we are not sure exactly which specific factors, beyond the overall quality of the program, contribute to it. Thus, the Program Director and the quality assurance committee will explore this question, beginning with a focus group session involving near-graduation students in early Spring 20XX. The information generated by this assessment will help us target, nurture, and capitalize on the contributing factors.
     
  • After reviewing last year’s data, we plan to develop an instrument to standardize the goals and outcomes for students’ reading proficiency across upper-level courses. The learning outcomes for reading comprehension will be expressly integrated into all syllabi, as will a discussion of reading strategies and approaches.

Resources 

This web content is intended to be a resource for those faculty members and administrators who are responsible for the process of assessing student learning in their units.