Measures of student learning can be either direct or indirect. Details and examples of both types of measure are given in turn below. When including measures of student learning in, for example, a teaching portfolio or an appointment/promotion case, candidates should provide an analysis of the approach taken and the results achieved, as well as a self-reflection on how the outcomes have informed their educational approach.
Direct measures of student learning capture the knowledge/skills/attitudes of the student cohort, enabling evaluation of student performance, either against a defined benchmark or through changes over time.
While direct measures provide explicit evidence of student learning, indirect measures provide evidence that suggests or implies that student learning has taken place.
While ‘direct measures’ can provide robust evidence of teaching achievement within particular courses or programmes, they are typically resource intensive, requiring time and expertise to design and collect. Such measures are therefore not routinely used within promotion cases. Where included, however, it is important for candidates to contextualise the data by describing the design and goals of the activity. As summarised below, direct measures of student learning that could be presented as part of an academic promotion case tend to fall into two categories: (i) direct measures of learning over time; and (ii) direct measures of learning at a single point in time.
The direct measures of learning over time likely to be most appropriate for inclusion in a promotion case are those involving pre/post testing of students, for example, on the basis of their conceptual understanding. One well-documented example of such pre/post testing is from the Massachusetts Institute of Technology (MIT), where a new active learning approach was adopted within an electromagnetics course with the aim of improving students’ conceptual understanding and reducing failure rates. Conceptual questions from standardised tests were administered to students both before and after the new course, and the results were compared to control group data from students studying under the previous, more traditional, course delivery style. The survey outcomes demonstrated that the new course delivered significantly improved conceptual understanding among students (Dori and Belcher 2005). The survey questionnaire used in this example can be accessed from Dori et al. (2007).
Concept tests, such as the Force Concept Inventory (Hestenes and Halloun, 1995) – available from Mazur (1997) – are widely used in engineering and physics schools across the world to evaluate students’ conceptual understanding.
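Although not specified in the sources above, pre/post concept-test results of this kind are often summarised using the normalised learning gain (Hake, 1998), which expresses a cohort’s improvement as a fraction of the maximum improvement available:

⟨g⟩ = (⟨post⟩ − ⟨pre⟩) / (100% − ⟨pre⟩)

For example, a cohort averaging 40% on the pre-test and 70% on the post-test would have a normalised gain of (70 − 40)/(100 − 40) = 0.5; because the measure is normalised against each cohort’s starting point, it supports comparisons between cohorts with different levels of prior knowledge.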
An alternative direct measure of student learning is the student learning journal, in which students are asked to reflect on the course and their learning on a weekly basis. An approach to designing and evaluating student learning journals is provided in Shiel and Jones (2003).
In contrast to direct measures, most indirect measures capture evidence at a single point in time and therefore do not necessarily offer insight into the ‘value added’ by the education or intervention. They have the advantage, however, of being relatively straightforward to collect in a standardised form that enables comparisons between cohorts.
Indirect measures typically relate either to institutional measures of student progression (e.g. pass rates, attrition rates) or to the perspectives of students and other stakeholders (e.g. unsolicited student feedback, student evaluation scores, employer feedback). Examples of indirect measures of student learning are listed below. Where possible, links to relevant measurement tools are provided.
Institutional student evaluation questionnaires are widely used by universities across the world as key indicators of academic teaching achievement. However, many such questionnaires have been designed ‘in house’ and some are reported to “lack any evidence of reliability or validity, include variables known not to be linked to student performance, and do not distinguish well or consistently between teachers and courses” (Gibbs, 2014). Summarised below are details of two alternative and highly regarded survey instruments that could be used by candidates to collect student evaluations in relation to a specific programme, course or activity:
Self-efficacy, or a student’s belief in their own abilities, has been shown to be a strong predictor of student learning and motivation (Zimmerman, 2000). Pre/post survey data that show improvements in student self-efficacy can be used within a promotion case to demonstrate, for example, the impact of a course or new pedagogy. A generic self-efficacy questionnaire (the Motivated Strategies for Learning Questionnaire) is available from Pintrich and DeGroot (1990).
Targeted self-efficacy questionnaires are also available, often focusing on specific skills and attitudes, such as entrepreneurship (Lucas, 2014), or on specific disciplines, such as engineering design (Carberry et al., 2010).
As a complement to student evaluation survey data, solicited or unsolicited feedback from students/graduates – for example, an email from a student describing the positive impact the candidate has made on their learning, progress and/or engagement – can be used to support the teaching element of promotion cases.
Indirect evidence of student learning can also include the achievements of students and graduates. Although, in most cases, it is very difficult to attribute such achievements to the learning opportunities and/or support provided by a particular academic, some exceptions may exist. For example, a promotion candidate could include details of the number of student teams from an entrepreneurship course that have since established successful startup businesses.
Other ‘indirect measures’ can be used to demonstrate both programme- and institutional-level impact in teaching and learning. Examples could include: