4

Improving university teaching through student feedback: a critical investigation

Torgny Roxå and Katarina Mårtensson

Abstract:

This chapter describes the use of student evaluations in the Swedish context in general, and in one university faculty in particular. The case study illustrates a systematic, well-organised way to collect and collate data from student evaluations on teaching. The study also includes a systematic process to interpret data, where lecturers, students and programme leaders join together. The crucial features of this system are described and analysed. One important conclusion is that, unless the data collected through student evaluations is interpreted and, most importantly, acted upon, the whole process of evaluation runs the risk of becoming a meaningless burden to all involved. Following from this, implications for leadership are also discussed.

Key words

strategic educational development

Swedish tertiary education

student evaluations

leadership

course experience questionnaire

Abbreviations:

CEQ

Course Experience Questionnaire (a student feedback questionnaire used in the case study)

HSV

Högskoleverket (Swedish National Agency for Higher Education)

KTH

Kungliga Tekniska Högskolan (KTH Royal Institute of Technology)

LTH

Lunds Tekniska Högskola (Lund University Faculty of Engineering)

SOU

Statens Offentliga Utredningar (Swedish Government official report)

Introduction

Sweden has a long tradition of student influence in higher education, dating back to the 1960s. Since then it has had powerful student unions as well as student representatives on committees and boards within universities. The practice of student feedback through course1 evaluations has gradually become established over the last decade. No doubt, this has fulfilled an array of important functions: allowing students to express their opinions about their education, acting as a ‘fire alarm’ when courses have been run badly, providing data for course improvements, and serving as a source of evidence for lecturer promotions. This chapter acknowledges all that; however, it moves beyond the importance of collecting feedback to problematise and explore some critical features of student evaluations as a mechanism for guiding development. This is done in the form of a detailed case study, in which a faculty introduced a large-scale system for collecting student feedback through course evaluations with the purpose of improving teaching and learning. The results are discussed mainly in terms of the importance of student and lecturer engagement, making use of the collected data, and leadership.

Background

Student evaluations are by no means straightforward to use as tools for development. They need to be interpreted in the light of the context in which they take place and in terms of different possible biases that might occur. Gender might, for instance, play a role, as reported by Centra and Gaubatz (2000) and Sprague and Massoni (2005) – students may favour lecturers of the same gender as themselves. Kwan (1999) and Liaw and Goh (2003) claim that student judgements are affected by class size and study level. It has also been shown that students’ perceptions of what constitutes good (or bad) teaching influence the way they evaluate their teaching and learning experience (Prosser and Trigwell, 1999; Ramsden, 2005). This chapter does not elaborate on the issue of student evaluations per se, but rather on the use of student evaluations as a foundation for educational development.

Since 2000, the Swedish Higher Education Ordinance has required that all students who are participating in or have completed a course (as part of a programme or as an independent course) be given the opportunity to express their opinions in a course evaluation organised by the institution.

Higher education institutions shall enable students who are participating in or have completed a course to express their experiences of and views on the course through a course evaluation to be organised by the higher education institution. The higher education institution shall collate the course evaluations and provide information about their results and any actions prompted by the course evaluations. The results shall be made available to the students. (Swedish National Agency for Higher Education, 2000, Chapter 1, Section 14)

The decision to make course evaluations mandatory for institutions came after a long debate in Swedish higher education, the origins of which can be traced back to the 1960s. The debate culminated in the 1980s, when student unions throughout the country demanded course evaluations as a tool for student influence and educational development. This resulted in a national inquiry into higher education, which in 1992 suggested that student participation in processes aimed at establishing and developing quality in university courses should become an organic part of Swedish higher education (SOU,2 1992: 1).

As a consequence, over the following ten years evaluations gradually became part of the Swedish higher education system, and today student evaluations are a widespread phenomenon in the Swedish higher education sector. However, what becomes apparent in the quotation above is that there is no regulation on how the evaluations should be designed, what purpose they should fulfil, or how they are to be used, except that the results must be made available to the students. The situation today therefore includes a multitude of ways of conducting course evaluations, as well as a multitude of ideas about what questions should be asked of the students and how the resulting data is to be stored, distributed, analysed or used. In short, the Swedish higher education system is crowded with data generated by students during processes that they, most likely, thought were for the development of university teaching and student learning. But there is no clear picture of how all this data (and effort) is utilised for improvement purposes.

This ‘scattered’ picture is reinforced by a collection of examples of how student evaluations are carried out in Sweden, published by the Swedish National Agency for Higher Education3 (Heldt-Cassel and Palestro, 2004). This publication describes approaches to conducting student evaluations in eleven Swedish higher education institutions. In the summary, Heldt-Cassel and Palestro conclude that ‘as the contributions in this book illustrate, the views on what a course evaluation should look like or how it should be designed vary’ (ibid.: 9).4 Different institutions, faculties and departments conduct student evaluations in their own ways, and it is therefore hard to comment on the effectiveness of these examples. In the introduction, Heldt-Cassel and Palestro state that the purpose of course evaluations is to enhance the quality of higher education through student feedback. However, the contributions in the anthology do not reach the depth at which trustworthy claims about such use could be made: they merely describe how evaluations are intended to work, not what actually goes on. In the authors’ own institution, a follow-up study on the use of student evaluations (three years after the 2000 regulation came into effect) showed that student evaluations had been introduced in most courses, but that the use of the results was patchy, as was the communication to the students about actions taken as a result of the course evaluations (Lund University, 2004). This corresponds to what was reported a decade earlier by the national SOU report referred to above, which stated in its foreword that ‘the use of course evaluations is rarely working optimally. Above all, there are shortcomings in terms of communicating the results to students’ (SOU, 1992: 7).

To expand on this, we take as a starting point two published Swedish examples, chosen for their somewhat critical elaboration. Both discuss the relationship between evaluations and development.

At the Royal Institute of Technology (KTH), a prestigious Swedish higher education institution, a course analysis consisted of quantitative data (number of students registered and completion rate), students’ views on the course, and an analysis made by the lecturer based on the first two sources (Edström, 2008). According to Edström, there was widespread dissatisfaction with this system. The student union complained that the results were not utilised further. The National Agency for Higher Education criticised KTH for a lack of consistency in the system and for not informing the students about the results. Many lecturers claimed that the compulsory evaluations were worthless, even ‘a venom’ (ibid.: 96). Edström’s empirical investigation showed that the data collected from students focused mainly on the students’ impressions of the lecturer(s); the lecturers were rated only according to hidden criteria. When the teaching itself came into view, the focus remained on the surface, on teaching per se, with almost no relation to student learning. Edström’s conclusion was that the student evaluations were ‘teaching- and lecturer-focused. As course development is not in the foreground, evaluations merely have a “fire alarm” function’ (ibid.: 95). In summary, Edström could see almost no relation between the system of course evaluations and the development of teaching and student learning; the course evaluations were mostly seen as a meaningless burden (ibid.: 96).

The example at KTH is not unique in Swedish higher education. Student evaluations are mandatory: the National Agency for Higher Education conducts audits to ensure their existence and, occasionally, criticises institutions for doing the job poorly, and the students ‘guard’ them as a potential tool for student influence. However, course evaluations too often appear not to ‘do the job’, namely enabling the development of teaching and student learning.

The second example takes a wider perspective on evaluation as a tool for the development of teaching and learning. Its locus is the Faculty of Medicine at Umeå University, another well-established Swedish higher education institution. Fjellström (2008) described how evaluations conducted in the institution led to an extensive dialogue with a number of stakeholders. The process led to an inquiry in which those responsible for the medical programme viewed evaluation as ‘a resource of interesting and challenging information offering a platform for exchange with engaged stakeholders’ (ibid.: 104). Fjellström made it clear that improvement was at the heart of the process; the results from the evaluation process were both welcomed and used for developmental purposes.

Further, Fjellström (2008) criticised the standardised type of evaluation process in higher education monitored by the national agency, which threatens to become ‘ritualised window-dressing’ (comparable to students taking a surface approach to their studies). Instead, she argued for a more contextualised evaluation in which the stakeholders formulate their own inquiry, focusing on the enhancement of learning for medical practice.

It might appear hard to see how Fjellström’s vision could be transformed into a large-scale model fitting modern higher education. Nevertheless, her suggestions might be worth considering if the alternative is a system regarded as a meaningless burden, as in the case of KTH. The critical features of the Umeå example are contextualisation, stakeholder ownership, dialogue, and inquiry for enlightenment.

In the following section, the authors investigate the concept of course evaluation in greater depth, through a large-scale and contextualised case study, which aims to stimulate a critical conversation about student learning.

Case study: student feedback at the Faculty of Engineering, Lund University

Lund University Faculty of Engineering (LTH) is a research-intensive faculty within one of the oldest higher education institutions in Sweden (Lund University was founded in 1666). LTH has 8,000 undergraduate students (LTH, 2009), 460 doctoral students (most of whom teach 20 per cent of their working time), and 525 lecturers. A majority of the lecturers are also active researchers, and most of this research is funded by external grants won in competition with other researchers. In the overall budget of the faculty, research funding is twice the size of the budget for undergraduate teaching. Almost all of the teaching is organised into five-year programmes, and the students are awarded Master’s degrees. The basic organising principle is that programme boards are responsible for the content of, and the teaching quality within, these programmes. The lecturers, however, are all employed by departments, in turn composed of one or more disciplinary communities. The programme boards thus buy courses from the departments and compile these courses into programmes.

LTH has a long tradition of a well-organised student union that is active not only in matters concerning education and its quality. The faculty also prides itself on systematically enhancing student learning. It organises pedagogical courses for lecturers (Roxå, 2005); in Sweden, ten weeks of teacher training is mandatory for everyone seeking a tenured academic position in a university (Lindberg-Sand and Sonesson, 2008). The efforts to raise the quality of teaching also include a reward system focused on the Scholarship of Teaching and Learning (Olsson and Roxå, 2008) and, since 2003, a bi-annual campus conference on teaching and learning with peer-reviewed papers (Tempte, 2003). As a complement to these activities, the faculty also uses an elaborate system of collecting student feedback on teaching and the support of student learning through course evaluations. This system, and especially its relation to the development of quality in teaching and student learning, is the focus of the following section.

Course evaluations at LTH

The purpose of student feedback on teaching and course evaluations at LTH is explicitly formulated in a policy document:

This policy describing the system of evaluation of undergraduate education at LTH shall contribute to a process where the quality of teaching is consciously and systematically enhanced.5 (Warfvinge, 2003)

Elsewhere the purpose of evaluations is described thus:

It [the system of course evaluations] is designed to promote an intensified, informed pedagogical discussion among lecturers leading to innovation, improved teaching, and student learning. (Roxå et al., 2007)

To achieve this, student feedback at LTH is collected in two ways.

1. For operational purposes. This refers to any feedback a lecturer can organise throughout a course in order to gain better insight into his or her students’ learning, so that teaching can be adjusted immediately. It occurs during the course and is in other contexts often called formative evaluation.

2. For reporting purposes. This refers to data collected at the end of the course in order to produce a document describing the quality of the course, and is in other contexts often referred to as summative evaluation. The purpose of this document, as explicitly stated in the policy, is to support the quality-enhancing dialogue between the programme boards, the departments and the students. (Warfvinge, 2003)

The terms operational and reporting were chosen by this faculty instead of formative and summative to emphasise the use of evaluations. Operational connotes doing something, while reporting is associated with documenting data to be utilised for information purposes.

Operational evaluation is any feedback that a lecturer gains in order to enhance the dialogue with the students, with the purpose of enhancing student learning. More formally, this is often close to Classroom Assessment (Angelo and Cross, 1993). It is frequently directed towards the students’ understanding of the material they are supposed to learn, or towards the context (which might support or hinder learning), rather than towards what the students think about the teaching or the lecturer. Operational evaluations support the lecturer–student dialogue during the course and are thereby very close to what constitutes good teaching. At LTH, it is the responsibility of the lecturers to organise operational evaluations during courses (Warfvinge, 2003), in whatever format they find suitable: quizzes in the classroom, short diagnostic tests, or meetings with student representatives. The policy does not state how operational evaluation should be conducted, only that it ought to be conducted and that the departments have to check that it is carried out.

Reporting evaluation is much more formalised within the faculty. Its purpose is to produce documentation about a course, once it is finished, which allows programme boards, heads of departments, the Dean and other external stakeholders to participate in a conversation aimed at development. LTH uses the Course Experience Questionnaire (CEQ) (Ramsden, 2005) at the end of courses with more than 30 students, which applies to the majority of courses. The questionnaire clearly supports a focus on student learning. It has 25 items on which students rate the extent to which they experienced, during the course, certain features known to support student learning. Items include, for example: ‘I got valuable comments from the lecturers during this course’ and ‘I usually had a clear idea of where I was going and what was expected of me in this course’. There are also opportunities for students to add comments in free text.

The questionnaire focuses on five areas of the teaching process which have been shown to relate to quality in student learning (Ramsden, 2005): appropriate workload (do students experience the workload as manageable?); appropriate assessment (do students experience the examination as supporting understanding?); generic skills (do students experience that the development of generic skills has been supported?); good teaching (do students experience support and encouragement from lecturers?); and clear goals (do students experience that the lecturers make an effort to help them understand what they are supposed to learn?).

The process of reporting evaluations at LTH runs in six steps:

1. Students fill in the form (paper or web-based) and add comments in free text.

2. The computer system transforms the data into a ‘working report’ (including all answers from students and the overall results from the examination).

3. The data in the working report is discussed at a mandatory meeting between the responsible lecturer, student representatives and a director of studies responsible for the whole programme of which the course is a part.

4. The lecturer, the students and the director of studies independently write short summaries of or comments on the discussion.

5. Statistically processed data and the comments from the discussion make up the ‘final report’.

6. The final report is then published on the faculty intranet and sent via e-mail to all students who took part in the course.

(In 2009, a requirement was added to the policy that programme boards shall include, in their annual reports, information on the data from, and the use of, student evaluations.) The policy document clearly emphasises that this system aims at supporting critical and informed discussions on the development of teaching and learning within the faculty.

By 2009, almost 100,000 questionnaires had been collected and stored in a database accessible to lecturers, students and others within LTH. Course evaluation is discussed in teacher training courses within the faculty. It has also been discussed at the campus conference on teaching and learning (see, for example, Borell et al., 2008; Sparr and Deppert, 2004). The student union is informed repeatedly about the purpose of course evaluations and promotes the system among its many branches.

In summary, LTH has successfully implemented and managed an elaborate system of collecting student feedback on courses. Moreover, the system is aligned with other educational development efforts, such as teacher training courses, a reward system for good teaching, and a bi-annual peer-reviewed campus conference on teaching and learning. LTH also has a faculty leadership devoted to the improvement of teaching and student learning.

Making strategic use of student evaluations

The purpose of course evaluations at LTH is, as stated in the institution’s policy, to enhance teaching quality and student learning through the establishment of an intensive and informed discussion within the faculty. Are there signs of this happening?

Students

The students’ confidence in the system is supported by an independent external survey (Lund University, 2005). Despite this confidence, it is worth noting that the response rate is diminishing, especially when web-based questionnaires are used. On the other hand, a local investigation (Borell et al., 2008) showed that even though the response rate goes down, the free-text comments are longer: from 50 characters (paper) to 100 characters (web). There is no correlation between the length of students’ free-text comments and their overall satisfaction with a course (ibid.). The local data also indicates that lecturers’ positive engagement with the course evaluations correlates with the students’ response rate (ibid.).

An interesting pattern emerged when the entire material in the database was analysed statistically: the CEQ results appear to vary significantly according to which programme the students are following. It is possible that students within a programme develop a programme-specific study culture, which, in turn, influences their experience of teaching as measured in student evaluations (Roxå and Modig, 2009).

Lecturers

Since the overall purpose of the evaluation system is to enhance teaching and student learning, it is important to also listen to the academic voice. What are lecturers’ perceptions of the course evaluation system at LTH? In a study conducted by four academics at LTH (Björnsson et al., 2009), the authors claimed that the purpose of the system has not been communicated to, or at least not received and integrated by, a large section of lecturers within the faculty. This claim was supported by the fact that in 50 per cent of the final reports, the lecturer responsible for the course did not add any comments. Björnsson et al. (2009) also referred to another group of lecturers who criticised the system for not being sensitive to disciplinary differences and for having had a hidden agenda of controlling lecturers at the time when the system was implemented (Sparr and Deppert, 2004).

As noted above, lecturers’ positive engagement with course evaluations appears to influence the students’ response rate, which also relates to the length of the comments made by the directors of studies. Overall, these patterns indicate that the attitudes of lecturers and directors of studies towards course evaluations affect students’ response rates. LTH lecturers’ attitudes towards course evaluations in general, and the CEQ in particular, vary. In a seminar specifically addressing the CEQ, 21 of 28 senior lecturers were positive and seven were negative. Since participation in the seminar was voluntary (and hence only interested lecturers participated), this ratio of positive to negative attitudes cannot be generalised to LTH as a whole.

A relatively unexplored area concerns how lecturers react emotionally to receiving student feedback. In Gustafsson (2009), a lecturer describes the frustration experienced on receiving a negative course evaluation. After carefully examining the data, Gustafsson concluded that, when interpreting course evaluations, other sources, such as exam results and course objectives, also need to be taken into consideration.

Two pilot studies at LTH have also explored emotional responses to student feedback. In the first (Svensson et al., 2009), a small number of lecturers at LTH were interviewed. The results showed that the interviewed lecturers were positive towards student feedback but, at the same time, revealed emotional tension when receiving the feedback provided by the system. Further, there was a small difference between female lecturers, who received more comments on their personal approaches, and their male colleagues, who received more feedback on what they did during their teaching. These results are consistent with previous findings in the literature (Centra and Gaubatz, 2000; Nasser and Fresko, 2002; Santhanam and Hicks, 2002; Sprague and Massoni, 2005). In the second pilot study (Bergström, unpublished), 14 LTH lecturers were interviewed in depth concerning their reactions to student feedback and the use of such material. Preliminary findings showed that they did not use the statistical material provided to them at all. Instead, all of them described an interest in the free-text comments; most of all, though, they looked for immediate reactions from the students during class. None of these lecturers described any formalised use of student feedback within their departments or disciplinary communities, even though almost all of them had informal conversations in which these matters were discussed. Another observation from the interviews was the absence of leadership: the lecturers described how they coped individually with the results of student evaluations. This lack of a supportive leadership or collegial culture has been confirmed in other studies of Swedish higher education (Swedish National Agency for Higher Education, 2008).

In Bergström’s (unpublished) study, the individual lecturers appeared sceptical towards formalised student evaluation, such as the CEQ, partly because it does not take specific disciplinary conditions into account and partly because the statistics do not match their experience of the course. They did, however, appreciate the free-text comments. With operational evaluation, the lecturers’ approach was different: many of them incorporated the activities that constitute operational evaluation as a matter of course. To them, it simply constituted good teaching. It is still unclear how the emotional tension experienced by lecturers influences their attitudes towards student evaluations, or their interpretation of the data produced.

The faculty leadership

Currently, there are very few examples at LTH of heads of departments or the Dean using results from student feedback in order to boost a developmental process. There have been no investigations into the unofficial use of student feedback, that is, the extent to which this type of material feeds into informal backstage conversations about teaching and learning. As for the programme boards, this has partly been discussed above, with reference to how the programme directors comment in the final reports. The use of data from student evaluations seems to vary tremendously between programmes, as it does between individual lecturers. Some programme boards actively use student feedback in negotiating about teaching with departments and with individual lecturers; others show no such activity. Recently, at the request of the faculty management, the programme boards have commented in their annual reports on the data produced during the reporting evaluation, as well as on the measures taken in response to this data.

External stakeholders

In 2006, the National Agency for Higher Education published a quality review of all civil engineering programmes in Sweden and concluded, about LTH, that: ‘CEQ is a good basis for course evaluation, but needs to be developed. Quality assurance is systematic and works well at all levels of the organization’ (Swedish National Agency for Higher Education, 2006).

In summary, the authors’ observations from LTH show that the institution has developed an elaborate system of student course evaluation. However, the authors also note that the practice of informed discussion of teaching and learning aimed at improvement is developed only in some contexts within LTH.

Discussion

This section highlights some of the challenges faced in utilising the LTH student evaluation system. The system is rather well developed: it employs an international, research-based questionnaire; it has built-in components of dialogue between lecturers, students and directors of studies; and the results are made publicly available for stakeholder scrutiny. Further, it is supported by other academic development activities, such as mandatory teacher training courses.

Is it only working in theory? The answer, in the light of the observations presented above, is both yes and no. The system generates data and the student feedback is made available. What we have found is a clear variation in the way lecturers and programme boards across LTH utilise this information. Some lecturers and programme boards are actively using the information while others are not.

Previous research has shown that lecturers’ use of student feedback varies with their conceptions of teaching and learning. Hendry et al. (2007) studied a sample of 123 university lecturers, focusing on the relationship between the lecturers’ conceptions of teaching and their use of student feedback. Their results showed that lecturers ‘strong on conceptual-change, student-focused (CCSF) approach are responsive to feedback and positive about strategies for improving their teaching’. They therefore recommended teacher training courses for all lecturers in order to improve the use of student feedback.

Variation in the use of student feedback at LTH might be explained by variation in lecturers’ conceptions of teaching. However, Hendry et al.’s (2007) recommendation of widespread education of lecturers does not explain the LTH case, since such a system is already in place and there is still vast variation in the use of student feedback. An explanation applicable to the LTH case is provided by Roxå and Mårtensson (2009), who have shown how individual lecturers discuss teaching with colleagues, but how these colleagues are carefully chosen and talked to in private, in so-called significant networks. These networks, it is argued, are the locus of construction and maintenance of lecturers’ conceptions of teaching and learning. This points towards an approach to academic development that moves beyond the training of individuals towards a more ‘culture-specific’ approach, aligned with the research on teaching and learning regimes (Trowler, 2009).

Another approach to enhancing the use of student feedback may be to require lecturers to report on how they respond to it. This would place the students’ experiences at the centre of the development of teaching and learning. However, two counter-arguments can be raised against such a development. Firstly, there is no guarantee that what lecturers report would actually be the truth. More likely, lecturers with a transmission-like conception of teaching would adopt a somewhat surface approach to the student feedback and to the instruction to report on its use. Secondly, students’ perceptions of teaching during a course vary depending on the approach to learning they use. Students using a deep approach demonstrate ‘a more sophisticated understanding of the learning opportunities offered to them than did students with surface approaches’ (Campbell et al., 2001). Lecturers using an information transmission approach to teaching would therefore be likely to interpret positive feedback from surface-approach students as encouragement rather than as a reason for critical reflection.

In addition to these arguments, a further managerial effort to demand the use of student feedback, especially if done instrumentally, might threaten the relationship between lecturers and management and, most importantly, between lecturers and students. In a thoughtful contribution to the discussion, Singh (2002) reflects on student evaluation of teaching in terms of student ratings. Drawing on Habermas, Singh argues that an unreflective use of student feedback contributes to a perspective in which students become education ‘consumers’ and lecturers become education ‘providers’: ‘So, instead of ticking multiple-choice boxes, our students could attempt to answer some open-ended questions which would encourage them to reflect on their educational experience, and consider their role and responsibility in it’ (Singh, 2002: 697). Again, the argument is that what is needed is complex and context-specific material that can be utilised to enhance the dialogue between the key players in education: students and lecturers. This reflects Fjellström’s argument (see the Background section above): ‘From being a somewhat threatening instrument of appraisal and grading, the living process of participation, dialogue and deliberation gradually opened up a view where evaluation was regarded as a resource of interesting and challenging information’ (Fjellström, 2008).

In summary, the case of LTH mirrors other accounts of the use of student evaluations. The key features of this discussion and lessons learned include the following:

Using the data

One might produce data in higher education mirroring students’ experiences of teaching, but the key issue is whether the data is used or not. A well-designed system for course evaluation might look impressive, but if it only fulfils the purpose of collecting feedback, it is merely a waste of lecturers’ and students’ time. The interpretation of the data also needs careful consideration and integration with other sources of information before it can provide the starting point for any development activity, not least because of the possible existence of the programme-specific student study cultures mentioned above, which may to some extent influence the results of student evaluations. Careful and critical analysis of the data is therefore necessary.

Influencing lecturers

Increasing the use of student feedback appears to be more a matter of having an impact on lecturers than of designing questionnaires or statistical procedures. Academics are trained in critical thinking; they easily ‘tear to pieces’ any method for collecting feedback if they do not believe in it or see the sense in doing it. A logical consequence is that measures with the potential to influence lecturers’ thinking must be built into the design of any system for collecting student feedback from the beginning.

Different ways of influencing university lecturers’ thinking about teaching and learning have been explored for many years, and examples in the literature are plentiful. Most of them, however, target the lecturer as an individual. The authors would like to add a cultural perspective, looking at the individual in his or her own context. Lecturers relate most of all to their own disciplines (Henkel, 2005). Moreover, they go through a long and intense period of socialisation before they are acknowledged as full members of a disciplinary community. This most likely has profound effects on the individual’s professional identity, including his or her ways of thinking. Therefore, staff development should stretch beyond the individual and into the realm of academic cultures (for an exploration of academic cultures and change, see Trowler, 2009). If a young lecturer enters a disciplinary community where student feedback is considered a resource for improvement, he or she will most likely embrace it too. If the opposite is the case, the academic’s attitude will be influenced accordingly. (It should, however, be noted that these processes of influence are not only one-way.)

Leadership

Many of the changes in Swedish higher education over recent years have put considerable pressure on individual lecturers to develop their courses and their teaching (Swedish National Agency for Higher Education, 2008). Staff development activities have also, to a large extent, focused on supporting the individual lecturer. Course evaluations may add to this focus, laying the full weight of expected improvements on the individual. As the case study presented in this chapter illustrates, there is also a need for leadership in the pursuit of the development of teaching and learning at all levels of the institution. The Faculty of Engineering management installed its system partly in response to the external pressure created by the 2000 regulation. In doing so, it ran the risk of focusing on quality assurance only, rather than on quality enhancement. To date, the vast amount of collected data has been little used by the faculty leadership, and the role of leadership in relation to student feedback on teaching therefore remains unresolved at LTH. Data needs to be utilised by leaders within an institution, and this follow-up must result in action, or leaders must communicate why the feedback has not resulted in action. However, the authors would warn against the temptation for management to utilise student feedback to put unreasonable pressure on lecturers concerning their performance. This may backfire, since it may ‘instrumentalise’ the relation between students and lecturers, as expressed by Singh (2002). The authors recommend careful consideration of disciplinary and cultural differences, as well as being explicit about the purpose of student feedback and how it is to be utilised.

In conclusion, the authors fully acknowledge the value of student feedback and course evaluations as one important tool for the development of teaching and learning in higher education. However, such feedback is not easily utilised for developmental purposes. The authors have elaborated on the complexity, the challenges and the critical features of these processes, so that student feedback does not become an instrumentalist exercise or, indeed, a waste of time.

Acknowledgement

We are extremely grateful to Mattias Alveteg, LTH, for invaluable critical comments on the content of this chapter.

References

Angelo, T., Cross, P. Classroom Assessment Techniques. San Francisco: Jossey-Bass; 1993.

Bergström, M. (unpublished). Lärares upplevelser av kursutvärderingar [Lecturers’ experiences of course evaluations].

Björnsson, L., Dahlbom, M., Modig, K., Sjöberg, A. ‘Kursvärderingssystemet vid LTH: uppfylls avsedda syften?’ [The course evaluation system at LTH: are the intended purposes achieved?] Inspirationskursen vid LTH. Lund: LTH; 2009.

Borell, J., Andersson, K., Alveteg, M., Roxå, T. ‘Vad kan vi lära oss efter fem år med CEQ?’ [What can we learn from five years with CEQ?]. In: Tempte L., ed. Inspirationskonferensen vid LTH. Lund: LTH, 2008.

Campbell, J., Smith, D., Boulton-Lewis, G., Brownlee, J., Burnett, P.C., Carrington, S., Purdie, N. Students’ perceptions of teaching and learning: the influence of students’ approaches to learning and teachers’ approaches to teaching. Teachers and Teaching: Theory and Practice. 2001; 7(2):173–187.

Centra, J., Gaubatz, N. Is there gender bias in student evaluations of teaching? Journal of Higher Education. 2000; 71(1):17–33.

Edström, K. Doing course evaluation as if learning matters most. Higher Education Research and Development. 2008; 27(2):95–106.

Fjellström, M. A learner-focused evaluation strategy. Developing medical education through a deliberative dialogue with stakeholders. Evaluation. 2008; 14(1):91–106.

Gustafsson, S. En reflektion kring kursvärderingars roll i högre utbildning [A reflection upon the role of course evaluations in higher education]. Written assignment in a teacher training course: ‘The good lecture’. Lund: Lund University; 2009.

Heldt-Cassel, S., Palestro, J. Kursvärdering för studentinflytande och kvalitetsutveckling. En antologi med exempel från elva lärosäten [Course evaluations for student influence and quality enhancement]. Report 2004: 23R. Stockholm: Swedish National Agency for Higher Education; 2004.

Hendry, G., Lyon, P., Henderson-Smart, C. Teachers’ approaches to teaching and responses to student evaluation in a problem-based medical program. Assessment and Evaluation in Higher Education. 2007; 32(2):143–157.

Henkel, M. Academic identity and autonomy in a changing policy environment. Higher Education. 2005; 49(1–2):155–176.

Kwan, K.-P. How fair are student ratings in assessing the teaching performance of university teachers? Assessment and Evaluation in Higher Education. 1999; 24(2):181–196.

Liaw, S.-H., Goh, K.-L. Evidence and control of biases in student evaluations of teaching. The International Journal of Educational Management. 2003; 17(1):37–43.

Lindberg-Sand, Å., Sonesson, A. Compulsory higher education teacher training in Sweden: development of a national standards framework based on the Scholarship of Teaching and Learning. Tertiary Education and Management. 2008; 14(2):123–139.

Lunds Tekniska Högskola. Om LTH [About LTH]. Lund: Lund University, Faculty of Engineering; 2009.

Lund University. Tillämpningen av kursvärdering och studenternas rättighetslista [The use of student evaluations and the students’ rights list]. Report 2004:228. Lund: Lund University, Evaluation Unit; 2004.

Nasser, F., Fresko, B. Faculty views of student evaluation of college teaching. Assessment and Evaluation in Higher Education. 2002; 27(2):187–198.

Olsson, T., Roxå, T. Evaluating rewards for excellent teaching – a cultural approach. In: Sutherland K., ed. Annual International Conference of the Higher Education Research and Development Society of Australasia. Rotorua, NZ: HERDSA, 2008.

Prosser, M., Trigwell, K. Understanding Learning and Teaching. The experience in Higher Education. Buckingham: Society for Research into Higher Education and Open University Press; 1999.

Ramsden, P. Learning to Teach in Higher Education. London: RoutledgeFalmer; 2005.

Roxå, T. Pedagogical courses as a way to support communities of practice focusing on teaching and learning. In: Barrie S., ed. Annual International Conference of the Higher Education Research and Development Society of Australasia. Sydney, Australia: University of Sydney, 2005.

Roxå, T., Andersson, R., Warfvinge, P. Making use of student evaluations of teaching in a ‘culture of quality’. Paper presented at the 29th Annual EAIR Forum, Innsbruck, Austria; 2007.

Roxå, T., Mårtensson, K. Significant conversations and significant networks – exploring the backstage of the teaching arena. Studies in Higher Education. 2009; 34(5):547–559.

Roxå, T., Modig, K. Students’ micro cultures determine the quality of teaching! Presentation at the 17th Improving Student Learning Symposium, Improving Student Learning for the 21st Century Learner, Imperial College, London, UK, 7–9 September 2009.

Santhanam, E., Hicks, O. Disciplinary gender and course year influences on student perceptions of teaching: explorations and implications. Teaching in Higher Education. 2002; 7(1):17–31.

Singh, G. Educational consumers or educational partners: a critical theory analysis. Critical Perspectives in Accounting. 2002; 13:681–700.

Sparr, G., Deppert, K. CEQ som rapporterande utvärdering – en kritisk granskning [CEQ as reporting evaluation – a critical review]. Inspirationskonferensen. Lund: Lund University, Faculty of Engineering; 2004.

Sprague, J., Massoni, K. Student evaluations and gendered expectations: what we can’t count can hurt us. Sex Roles. 2005; 53(11–12):779–793.

Statens Offentliga Utredningar. Frihet, ansvar och kompetens [Freedom, responsibility and competence]. SOU 1992:1. Gothenburg: Swedish Ministry of Education, Graphic Systems AB; 1992.

Svensson, Å., Fridh, K., Uvo, C., Hankala-Janiec, T. Kursutvärderingar ur ett genusperspektiv [Course evaluations from a gender perspective]. Assignment for the course ‘Gender-psychological aspects in teaching and learning – women, men and technology’. Lund: Lund University, Faculty of Engineering, Genombrottet; 2009.

Swedish National Agency for Higher Education. The Higher Education Ordinance. 2000. Available online at: http://www.hsv.se/lawsandregulations/thehighereducationordinance.4.5161b99123700c42b07ffe3981.html#Chapter1 (accessed 27 July 2010).

Swedish National Agency for Higher Education. Utvärdering av utbildningar till civilingenjör vid svenska universitet och högskolor [Evaluation of civil engineering programmes at Swedish universities and institutions of higher education]. Report 2006:8R. Stockholm; 2006.

Swedish National Agency for Higher Education. Frihetens pris – ett gränslöst arbete. En tematisk studie av de akademiska lärarnas och institutionsledarnas arbetssituation [The price of freedom – working without limits. A thematic study of the working situation of lecturers and heads of departments]. Report 2008:22R. Stockholm; 2008.

Tempte, L. Pedagogisk Inspirationskonferens [Pedagogical Inspiration Conference]. Lund: Lund University, Faculty of Engineering, Genombrottet; 2003.

Trowler, P. Beyond epistemological essentialism: academic tribes in the 21st century. In: Kreber C., ed. The University and Its Disciplines: Within and Beyond Disciplinary Boundaries. London: Routledge, 2009.

Warfvinge, P. Policy för utvardering av grundutbildning [Policy on evaluation of undergraduate courses]. Lund: Lund University, Faculty of Engineering; 2003.


1The Swedish higher education system is to a large extent modularised into courses, which lead to different degrees. Courses can be mandatory or elective within different programmes.

2Statens Offentliga Utredningar (Swedish Government official report).

3A national body that oversees quality in Swedish higher education.

4Translated from Swedish by the authors.

5Translated from Swedish by the authors.
