Chapter 3 - Evaluating teaching: guidelines and good practice

Evaluation of teaching involves collecting evidence, from various stakeholders, for the purpose of improving the effectiveness of the teaching-learning process. A successful evaluation generates outcomes that are valid and reliable, and that indicate directions and actions for development.

These guidelines suggest five key questions to be addressed when considering the practical issues of evaluating teaching.

  1. What is the purpose of the evaluation?

    Good practice: make clear to all those involved the purpose, or purposes, of the evaluation. The evaluation should centre on:

    - the quality of the educational provision (the product) - which could be the whole programme, a course (module), a class (lecture, seminar, laboratory, etc.)
    - the performance of the provider(s) - the academic staff, tutors, support staff, involved in the delivery of this programme/course/class
    - the experience of the learners as partners in the process - their experience of what is provided and of the providers, and their motivation and approach to learning
    - a combination of these things - provided that the various purposes are made absolutely clear to those asked to make the evaluation.
  2. What is the key focus of the evaluation?

    Good practice: make clear to all those involved the key focus of an evaluation. If there is more than one area to focus on, organise the questions into clusters, identifying the focus of each cluster. Avoid too many focus areas, as this will confuse the evaluation. Help the evaluator to concentrate on a maximum of three key questions, so that they can provide you with a constructive and reflective response to each one.

    For example, you might want to know about these areas, asking these questions:

    The learning experience
    - Are the aims of the session and the learning outcomes clearly stated at the outset and met through the learning and teaching activities?
    - Is the curriculum content meaningful, relevant and pitched at the appropriate level for the course and learners’ existing knowledge?
    - Are learners motivated and actively engaged in learning? Are they attentive and participating when required?
    - Is the content presented in an effective and engaging way, employing a variety of methods?

    Assessment of learning
    - Is the assessment method clear, transparent and valid?
    - Are the assessment criteria accessible and at the correct RQF level?
    - Are the assessment criteria ‘constructively aligned’ with the learning and teaching activities and the intended learning outcomes?
    - Is the quality of feedback appropriate and linked to improving learner performance?

    Curriculum
    - Is the curriculum challenging enough, holding learners’ interest?
    - How reasonable is the workload involved?
    - Does the curriculum develop skills, knowledge and experience relevant to the programme and individual professional development?
    - Do learners take advantage of support and resources?
  3. Who will be asked to make the evaluation?

    An evaluation of teaching will normally be designed for learners as the primary participants in the learning and teaching experience. However, there is significant advantage in also seeking evaluation by others. Feedback from colleagues and other staff, as well as from learners, allows for triangulation of different perspectives, which adds to the reliability and validity of the outcomes of the evaluation process. Such correlation provides insight into the level of harmony, or disharmony, between the perceptions of the partners in the teaching-learning process. Internal as well as external peers can be invited to participate in the evaluation, to introduce a wider perspective on, for example, the academic standards of the educational provision.

    Good practice: correlate outcomes of evaluation from different viewpoints wherever possible.
  4. Who will see the outcomes of the evaluation; who will act upon them?

    Good practice: make clear, from the outset, to all involved in the evaluation process who will see the evaluation report, who will be responsible for taking action, and who will monitor the effectiveness of this action. Avoid undertaking an evaluation of matters where there is no realistic prospect of any action following the outcomes of evaluation.

    Before designing an evaluation it is necessary to define:

    - how, and by whom, the raw data will be processed to generate outcomes
    - to whom the outcomes will be reported, and in what detail
    - who bears responsibility for taking action in the light of the outcomes.

    Both the evaluators and the evaluated, as well as the person(s) responsible for the management of the quality of the educational provision, must have access to the outcomes of the evaluation. This should include the Head of Department and Academic Lead or PDR Reviewer. Use the Annual Review of Teaching proformas to ensure that you provide an appropriate level of detail.
  5. What methods of evaluation are available?

    Good practice: once the purpose of the evaluation, its focus, reporting and responsibility for action have been determined, decide on the method of evaluation that best suits these criteria. There is always an advantage in using several methods of evaluation and correlating their outcomes.
    1. Questionnaire

      This familiar method of seeking feedback from learners and participants has the potential advantages of speed (in administration), anonymity (of response) and standardisation (for purposes of comparison between cohorts). The shortcomings can include a poor response rate and poor validity of the outcomes if the questionnaire is not designed with care (for purpose and focus), or if questionnaires are over-used (the effect of "questionnaire fatigue").

      The Accelerate system gathers information from learners both during and after completion of each module. The University wants to make sure that we actively engage learners in providing feedback that lecturers can respond to and act on.

      You might also want to run a specific survey about an innovative or new learning activity, to help you evaluate its success or identify areas for further development, and you can do this using Accelerate.

      Points to consider:

      a) Who should design the questionnaire? This is determined by the purpose of the evaluation; most commonly, it is the person(s) responsible for the delivery of the education under evaluation.

      It is good practice to seek the views of the intended evaluators on its suitability for the purpose.

      b) Should the questions be designed for response on a rating scale (e.g. a positive statement with response on a scale of 'agree' to 'disagree')? Rating scales lend themselves to rapid processing and ease of comparability across cohorts/years, especially when using online survey tools, but they limit the range of response.

      c) Should the questions be designed for freeform response (e.g. "what did you find most challenging about ...")? Freeform responses allow a more subtle range of answers, and allow issues to be raised beyond those set out in the questionnaire. However, they take longer to complete, longer to process and much longer to report. A good compromise is a questionnaire that is mainly of rating-scale format (for speed and consistency), with some opportunities for freeform response.

      It is good practice for the processing and reporting to be done by someone not closely involved with the subject of the evaluation.

      d) How long should the questionnaire be? So that the purpose and focus remain clear, it is good practice to keep a questionnaire short - about 10 questions would be about right for a rating-scale format, but far fewer if all the questions allow freeform response.

      e) When should it be administered? The answer to this depends entirely on the purpose. For example, evaluation after the end of a module gives a more complete picture, but comes too late for that cohort to benefit from the information; evaluation part-way through the module, or after individual classes, gives an incomplete picture, but enables some adjustment of the remainder of the module to benefit that cohort. The purpose and focus also determine the best frequency of administration, but it is unwise to overload learners to the point that questionnaire fatigue sets in.

      It is good practice for a department to have a planned schedule of evaluation, with higher frequency of evaluation where there is cause for concern, and lower frequency where evaluation gives stable positive outcomes.
    2. Structured group interview (nominal group technique)

      This is a meeting with learners or participants where they are asked to give their views about a programme, course or class. The meeting must be planned and structured carefully to generate constructive debate, and so that learners/participants feel free to express their views without personal risk. Typically, learners are asked to work in small groups to reflect upon positive and negative features of the educational provision (e.g. programme, course or module), its delivery, and their own performance and experience. A spokesperson from each group is asked to relay the considered views of the group to the meeting. The role of the member of staff leading the meeting is to compile a summary of such views, to validate them at the meeting, and, later, to produce a short report of the main outcomes. It is an advantage for this person to be someone from outside the department or teaching team, to support anonymity and to provide a safe environment for learners to express their views honestly. Learners who take part should be provided with a copy of the written report.

      The structured group interview allows learners to have greater freedom of expression than a questionnaire, and more opportunity to make constructive suggestions for improvement. It typically requires a meeting of about an hour, but the processing is done during that hour and the time needed for producing a report is short.

      It is good practice for a group interview to be led by an experienced leader who is not involved in the delivery of the educational provision that is being evaluated, and preferably not in the same department.
    3. Student-Staff Liaison Committee (SSLC)

      Every academic department has a Student-Staff Liaison Committee (SSLC). While this is neither the right venue, nor necessarily the right group, with which to conduct an evaluation, it is an opportunity for learners and staff, as partners in the teaching/learning process, to:

      - identify priorities for scheduling evaluations 
      - propose particular purposes and the focus of an evaluation 
      - comment on the outcomes of evaluations, their validity and reliability
      - explore next steps and options for action.

      It is good practice for every meeting of an SSLC to include a default agenda item on the evaluation of learning and teaching, so that it is a legitimate topic to raise at any meeting. Points raised at an SSLC can provide useful correlation with the outcomes of evaluations made by other methods (see also the Code of Practice for SSLCs).
    4. Self-evaluation and Peer evaluation

      A simple but effective form of self-evaluation by a course/module team is for the team to undertake the same evaluation as the learners, i.e. to complete the same questionnaire, or to conduct a self-evaluation using the same structured group interview format. The degree of harmony between the staff responses and the outcomes of the learners' evaluation is a good indicator of staff awareness of learner perceptions. Areas of disharmony suggest that staff understanding of learners' needs and concerns should be addressed before attending to the issues of detail that arise from the evaluation.

      Regular peer evaluation of the teaching/learning process to support self-evaluation is a valuable activity. The University requires all those involved in teaching and/or supporting learners to complete an Annual Review of Teaching (ARTS) each year. Details of the ARTS scheme can be found here.

      A colleague, focusing on the process that is taking place, adds a dimension of evaluation that may escape the member of staff and the learners, who are generally too busy with the business of teaching and learning to observe the process itself. However, casual 'dropping-in' on a class is not the best approach to peer evaluation.

      Further examples of observation checklists and features to look for in the teaching/learning process are available from the Academic Development Team at academicdevelopment@exeter.ac.uk. Both self-evaluation and peer evaluation can align with, and add value to, other forms of evaluation of teaching.

      It is good practice for peer evaluation to be a planned and structured process, involving the separate stages of:

      - briefing, to set context and identify aims and learning outcomes of the process that is to be observed
      - observation, with a checklist of key features to look for, agreed at the briefing
      - de-briefing, to give feedback on the observation and to compare the perceptions of the observer and the observed (see the Annual Review of Teaching scheme).

Last updated September 2022
