
Feedback cycles or evaluation systems? A critical analysis of the current trends in student feedback in Austrian social sciences

Oliver Vettori and Nina Miklavc

Abstract:

This chapter provides a brief overview of the socio-historical genesis of institutionalised student feedback in the Austrian social sciences. The authors discuss recent legal, political, social and educational developments in this field and relate them to similar developments in a broader European context. Four major trends are identified: the diversification of feedback forms and emergence of new standardised feedback methods; the shift from student feedback as an isolated instrument of quality assurance towards their integration into institutional quality management systems; the shift from feedback on the teachers’ performance to feedback on learning processes and learning outcomes; and the shift from interactive and/or paper and pencil feedback forms towards online evaluations and electronic feedback. The authors conclude the chapter by discussing the implications and manifest consequences of such developments and critically analyse their impact on the nature of student feedback itself.

Key words

student feedback

teaching evaluation

quality assurance

trends in higher education

Introduction

One of the fundamental tools of the social science disciplines is communication, the basis of relationships and interactions between individuals in any kind and size of social community. This is how the social sciences work, both in research and in teaching. Collecting feedback – a two-way communication process, a dialogue (Carless, 2006; Hyatt, 2005), a multidirectional transmission of information – is thus undoubtedly an important and inherent element of the social sciences.

Given the importance of the communicative element in feedback, it is surprising that student feedback in the Austrian social sciences, and in Austrian higher education institutions more generally, has a rather short history, at least in its current systematic form. Student feedback in Austrian higher education is largely driven by enacted laws and regulations as well as by socio-political developments at the national and international level (e.g. the Bologna Process). The first part of this chapter will thus focus on matters of context and provide some insight into the history of and current situation in Austrian higher education.

The resulting tension between student feedback as a matter of regulatory compliance and accountability on the one hand, and its utilisation as an instrument for improvement on the other, is a common issue in the Austrian discourse about higher education. In light of this, four major trends – illustrated in the second part of this chapter – are noticeable:

■ the shift from more than 20 years of adherence to traditional questionnaire-based student evaluations of teaching towards a broader spectrum of methods and approaches

■ the shift from student feedback as an isolated instrument of quality assurance towards its integration into institutional quality management systems

■ the shift from feedback on the teachers’ performance to feedback on learning processes and learning outcomes

■ the shift from interactive and/or paper and pencil feedback forms towards online evaluations and electronic feedback.

This chapter concludes with a critical appraisal of the four trends and attempts to outline where such developments might lead in the near future.

First, the terminology used in this chapter needs to be clarified. The German terminology currently used for student feedback is not distinct enough. In fact, research (e.g. Schmidt and Loßnitzer, 2010) indicates that expressions differ enormously and there are no well-defined and universal terms. ‘Feedback’ and ‘evaluation’ in particular are usually used synonymously, at least in the area of student feedback. Often no distinction is made between ‘feedback’, with its major focus on improvement, and ‘evaluation’, which is aimed at accountability. The equation of these two terms explains at least part of the tension mentioned above. In this chapter, ‘student feedback’ is used when writing about the concept per se, and ‘course evaluations’ or ‘student evaluations of teaching’ when referring to the particular form.

The ‘peculiar sector’: an overview of the Austrian higher education system

Overall, the Austrian higher education environment is complex, consisting of public universities, private universities, universities of applied sciences (Fachhochschulen or FH) and university colleges of teacher education (Pädagogische Hochschulen). All these have a different legal basis and are even the responsibility of two different ministries. The overview which follows will be limited to the first three sectors; more than 90 per cent1 of all students in Austrian higher education are enrolled in these. The 21 public universities are by far the oldest (the University of Vienna dates back to the fourteenth century) and largest institutions. They cover numerous academic disciplines and offer a broad range of educational programmes. The FH entered the field in the academic year 1994/95, representing a new type of higher education institution with a major focus on vocational training and (initially) rather weak research orientation. Compared with the public universities, the number of students enrolled is still rather low,2 yet this sector’s rapid growth in combination with its different legal status (unlike the public universities, FHs can select their students by means of entrance examinations and are funded on the basis of student numbers) causes tension with the other sectors. The private universities came into existence in 1999; however, this sector is still of minor relevance in terms of size3 and influence. In order to offer educational programmes leading to an academic degree, these institutions have to be recognised by the state, usually in the form of an official accreditation.

In many ways, Austrian higher education is characterised by a multitude of apparent and actual contradictions and paradoxes, as can be seen in the discrepancy between the country’s considerable investment in its education system and the system’s lack of effectiveness (e.g. OECD, 2010), or in the well-known, although still ignored, relationship between the free access policy in public higher education and the comparably high drop-out rate and low ratio of academics in society (ibid.). The free access policy in particular is a prominent bone of contention, with only a few exceptions (e.g. medicine or arts). Public universities cannot select their students by any means: every student who holds the appropriate school leaving exam certificate or an equivalent is regarded as qualified to enter any field of studies s/he is interested in (‘entitlement system’, cf. Pechar and Pellert, 2004). Pechar and Pellert (2004), for instance, have been leading critics of this system, pointing out that ‘no other educational sector in Austria is subject to such strange regulations’ (320). Yet arguably the most problematic aspect lies in the incongruity between the free access policy and the universities’ funding, as the universities’ student capacities are not factored into their respective budgets. As a consequence, most public universities are seriously underfunded with regard to their student numbers and can only manage by utilising large class teaching and electronic substitutes for individual mentoring, and by generating high drop-out rates in the first year of studies.

This unsatisfactory situation in the public sector culminated in autumn 2009, when further cuts in the educational system were demanded by the government, resulting in huge student protests.

From ‘teaching censorship’ to ‘quality assurance’: a brief historical tour of Austrian feedback mechanisms in higher education

Systematic student feedback in Austrian universities emerged in the late 1960s4 and since then has developed relatively slowly. At that time, during the so-called ‘student revolution’, Austrian students implemented a kind of ‘teaching censorship’ that focused on socio-critical aspects of the hierarchical higher education system (Preißer, 1992). The decidedly critical framing of student feedback and its frequent use as a means of conflict made it an issue that was neither welcomed nor much respected by the lecturers. In the 1970s, the situation calmed down somewhat, yet the instruments remained disputed, as the first questionnaires in the German language proved to be severely methodologically flawed. Nevertheless, the first legal steps towards implementing and institutionalising student feedback were made by including some basic elements of performance monitoring and reporting in the Universities Act 1975 (UOG 1975).

Overall, however, it took another decade for student feedback to be revived and start to prosper. In the late 1980s another student feedback initiative was introduced by the Austrian Student Union (Österreichische Hochschülerschaft or ÖH), with a major focus on gathering information on course offerings. Simultaneously, changes in the political culture and legal frameworks increased the relevance of quality (assurance) in the higher education discourse. Yet the intentions of the Ministry of Science were quite different from those of the students: the political authorities ‘discovered’ student feedback as an instrument of accountability and judgement (Spiel and Gössler, 2001) and have used it as such since.

With the Universities Act 1993 (UOG 1993), evaluations in general became an integral part of the Austrian university system. Yet without any practical know-how upon which to build, the corresponding regulations barely came to life. Gradually the media (e.g. the renowned weekly magazines Der Spiegel and Profil) drew attention to another facet, as student feedback (and, occasionally, peer assessment) became a part of the increasingly popular rankings of institutions and study programmes.

From the early 2000s, the national level was more and more dominated by developments at the European policy level, particularly the so-called Bologna Process5 that has led to major national reforms in the signatory countries (Loukkola and Zhang, 2010; Westerheijden et al., 2007). With regard to quality assurance, the European dimension was an important influence on shaping and legitimating the relevant frameworks and processes and institutionalising student feedback measures in Austria. After the Bologna Declaration was signed in 1999, some major organisational reforms were brought about in Austria, including a new legal status for the public universities, a strengthening of university management, and the requirement to develop an integrated institutional quality management system.

Most of these reforms are included in the Universities Act 2002 (Universitätsgesetz 2002; UG 2002), which also dedicates a special paragraph6 to the issue of institutional quality management. After a short introductory passage, however, the paragraph is almost entirely dedicated to evaluations and how they should be conducted. Student evaluations of teaching are no longer explicitly mandated but remain one of the most frequently used evaluative instruments. This issue is further discussed in the next section.

Emerging trends

As we have shown in the short historical overview, Austrian higher education has experienced its share of structural reforms and political developments over the past decades. It has also become apparent that, in many ways, the history of student feedback mirrors broader changes in this field. The four trends analysed in more detail in this chapter could similarly be seen as manifestations of more general developments. Yet at this point, the four emerging trends we found to be the most obvious and relevant – some already well established, some only beginning to emerge – primarily provide a picture of the current state of student feedback in the Austrian social sciences.

A few words of caution are necessary. The picture that is drawn here is far from complete. The four trends described below were selected on the basis of our long professional experience (as students, teachers, administrators and managers) in Austrian higher education, as well as on the basis of regular discussions with other professionals and experts in this field. Without extensive research within the classrooms and comprehensive interviews with students and teachers alike, it is quite likely that we have omitted other trends, which could either be very new and evolving or localised, and thus difficult to recognise or articulate more broadly.

Trend 1: the diversification of feedback forms and emergence of new standardised feedback methods

Student evaluations are probably one of the most common sources used in feedback on teaching in Austrian higher education. In fact, student questionnaires are the most common way for institutions to receive feedback, and therefore an essential instrument for quality assurance across Europe (Sursock, 2011). They belong to the ‘standard equipment’ of every Austrian higher education institution and, for a long time, have been equated with quality assurance. However, students are not the only source from which feedback on teaching quality can be obtained. Courses can also be viewed from the perspectives of colleagues, co-lecturers or internal or external experts (through peer observation), and graduates, or by self-reflection. In fact, for a complete and fine-grained picture an integrated approach might be necessary (Westerheijden et al., 2007).

To draw a detailed picture and meet the different purposes of students and teachers alike, a more diverse range of methods has emerged. The most popular ‘new’ approaches are fast feedback methods (similar to the ideas of ‘classroom assessment’ and ‘student activation’), peer feedback and graduate surveys, with the latter becoming the latest ‘must have’ among institutional quality assurance instruments.7

Fast feedback provides a multi-faceted alternative to the standardised and compulsory course evaluations, with a great variety of methods and tools that are easily applicable and usually more informative than pre-scaled questionnaires. It can be used at different points in time (e.g. at the beginning, during or at the end of a course) and for various purposes, such as checking whether the students are at a similar level of knowledge at the beginning of a class, assessing whether their learning progress is in line with the course objectives or analysing the strengths and weaknesses of a course from the students’ perspective. Fast feedback is usually strongly personalised and contextualised and is thus not suited for comparative purposes or quantitative analyses. This makes it a popular approach for teachers and support centres, but less popular with most institutional quality assurance centres, which rely on the apparent comparability of data. In the Austrian context, fast feedback is currently gaining more attention at the institutional level, yet there is practically no evidence on how it is used by teachers and programme managers.

The increased popularity of graduate surveys can at least partly be attributed to the employability discussion that has accompanied most curricular reforms in the wake of the Bologna Process. Information on job placement and the average income of graduates is becoming more and more relevant as ‘quality indicators’, within the institutions as well as for external higher education rankings. Consequently, one of the key functions of the newly emerging graduate surveys is to shed light on the alumni’s careers. Yet even though the feedback function might be secondary to this career monitoring, almost all respective instruments contain evaluative sections where former students are asked for their opinion on aspects of their education, the quality of teaching, or the university’s student services. The data processed from these standardised questionnaires is sometimes fed into institutional quality management systems (cf. trend 2), yet generally has the same disadvantage as course evaluations – the results can indicate certain areas of improvement but rarely suggest what could actually be improved. A low level of satisfaction provides little information on the specific source of the dissatisfaction. In addition, the purpose of the survey is seldom specified and it is unclear who the recipients of the feedback are.

Trend 2: the shift from student feedback as an isolated instrument of quality assurance towards its integration into institutional quality management systems

In the brief history of feedback mechanisms in Austrian higher education above, the UG 2002 was identified as an important factor with regard to the issue of quality management and quality assurance. Until the late 1990s, formalised internal quality assurance procedures were practically non-existent. Evaluations were de facto equated with student satisfaction surveys at the end of a course (cf. Stifter, 2002) – a situation that was hardly unique to Austria but could and still can be observed in most European countries (Loukkola and Zhang, 2010; Sursock, 2011). Although such satisfaction surveys and course evaluations were common enough, the initial enthusiasm soon diminished: students did not perceive any impact from their evaluations, and teachers were often unsure how to interpret the results, as the feedback processes usually ended with an aggregated analysis of results and a compilation of reports by an administrative unit.

At least in theory, the situation has changed with the establishment of institutional quality management systems as required by the UG 2002. Overall, the requirements are very general, as the respective paragraph in the legislation only demands that universities develop their own quality management systems in order to assure quality and the attainment of their performance objectives. The specific design of such a quality system, the choice of quality management instruments and procedures, and the decision as to which processes are implemented at which organisational level were left up to the universities (cf. Hanft and Kohler, 2007: 84). On the surface, this is very much in line with the Bologna requirements, since the Berlin Declaration (2003) explicitly states that the primary responsibility for quality assurance lies with each higher education institution. A closer look, however, quickly reveals the underlying understanding of quality assurance. Apart from the general opening passage concerning the development of a quality management system, the entire paragraph shows a preoccupation with evaluations. This led to two important developments. First, when looking for a way to integrate their evaluations into a quality management system, many institutions oriented themselves towards the Deming Cycle (cf. Deming, 1982) or derivatives with a similar closed ‘plan-do-check-act’ logic. Secondly, when looking for suitable evaluations that could be integrated into such systems, institutions soon realised they could use their best-established evaluation mechanism, i.e. student evaluations of teaching. As a consequence, such evaluations were partly integrated in quality cycles of their own, or the results were at least reframed as management information data that would/could inform staff decisions.
In practice, however, public services legislation and the actual cultural context prevent most universities from (mis)using student evaluations as a rigorous performance monitoring tool. The situation is a little different for most FHs, which underwent this development five to ten years earlier and are regulated by a different legal framework.

The resulting consequences can be regarded as a mixed blessing. On the one hand, the much-discussed lack of follow-up procedures has gained new momentum, leading to new process models, at least on the conceptual level. On the other hand, reframing student evaluations as data of managerial relevance to be used by institutional management has overshadowed the original feedback functions and raised new issues of trust, anonymity and control. This argument will be taken up again in the concluding section.

Trend 3: the shift from feedback on the teachers’ performance to feedback on learning processes and learning outcomes

The shift from lecturer-oriented to student-centred teaching is probably another development within the Austrian higher education context that has been instigated by the Bologna Declaration.8 In line with the Bologna objectives, teaching is seen more and more as an educational process that focuses on developing skills (competencies) and promoting attitudes instead of merely delivering teaching content. In the past five or so years, practically all public universities and FHs have rewritten most of their curricula and syllabi, replacing ‘teaching objectives’ with ‘learning outcomes’. However, there is little evidence as to whether these changes reach the level of actual course designs and learning and teaching strategies, or are merely cosmetic. The increasing number of conferences, workshops, guidelines and staff development activities focused on learning processes, learning outcomes and assessment forms suggests that the issue is getting considerable attention in the Austrian higher education community. Such developments are at least partly mirrored in the feedback forms used at the institutions. Apart from a visible tendency towards fast feedback forms and classroom assessment (as described in trend 1), an impact can also be observed on the more traditional course evaluations. In many cases, this impact is limited to a reformulation of items (e.g. checking whether the course’s learning outcomes were clearly defined or assessing the learning experience), but in other cases the whole evaluation logic has been reworked in order to fit the new learning outcome focus.

Two universities in particular – the University of Graz and the University of Applied Arts, Vienna – stand out. The University of Applied Arts has fully implemented its TELOS model (Teaching Evaluation, Learning Outcome Sustained) since 2009, an approach to course evaluations that had not previously been utilised at the level of a whole institution. It covers a full PDCA cycle of individual teaching and thereby follows the strategy of student-centred course evaluations (cf. Kernegger et al., 2009). First, the lecturers choose the concrete competencies their course should help develop, selecting from different types of competencies such as specific knowledge, methodological and practical skills, social skills and personal skills. They can then freely formulate the learning outcomes for their courses, as long as these comply with the university’s mission and strategic goals. This option offers highly individualised feedback from the students in each course. ‘Lecturers also declare how their courses contribute to the objectives of study programmes and to overall objectives of the university in order to secure the institutional embedding of a highly individualised approach to evaluation’ (Kernegger et al., 2009).

At the Karl-Franzens-University in Graz, course evaluations are well established, although almost entirely focused on the students’ assessment of the teaching performance. In the academic year 2008/09, the university began to implement its GEKo model (Grazer Evaluationsmodell des Kompetenzerwerbs). The evaluation is now based on newly designed questionnaires ‘in which students are asked to assess their attainment of [the] various competency domains within a course’ (Paechter et al., 2007). The GEKo model defines various dimensions that can be assessed: specialised knowledge and understanding in the field of studies; methodological and practical skills (e.g. applying the lessons learned); social skills (e.g. working in teams, interaction with others); personal skills (e.g. self-management); and media literacy (e.g. competency in using new media) (Dorfer et al., 2010). Different types of courses develop different types of competencies, and teachers can also include their own questions in the questionnaires. Other aspects taken into consideration are the didactics of the teaching staff and the gender dimension. The underlying assumption is that making teaching staff more receptive and sensitive to gender equality, by using appropriate language (e.g. actors and actresses) or by avoiding gender role stereotyping, makes students aware of social contexts (Moerth and Hey, 2006).

It is still too early to see how this new type of student feedback, which the institution aims to use in the long term, will impact teaching – the teachers’ performance can only be indirectly deduced from it, and satisfaction items play a minor role. How well this trend will fit with the requirement of data delivery for management purposes remains to be seen, although some indications have been described in trend 2. So far, no other institution has followed the example, but the redesign has been a source of considerable debate, and its strength certainly lies in its compatibility with the Bologna Process. It will be interesting to see whether this approach becomes as widely adopted as the developments described in the previous two trends.

Trend 4: the shift from interactive and/or paper and pencil feedback forms towards electronic feedback

The internet and various other means of electronic communication have rapidly and increasingly impacted the daily lives of individuals in recent years. In 2010, 73 per cent of Austrian households had internet access (cf. Statistik Austria).9 Nowadays, higher education institutions are confronted with highly web-literate students, who spend a substantial amount of time online and gather information on the internet as an extension of their everyday communications. The so-called Net Generation students (Oblinger and Oblinger, 2005) grew up using Google as a convenient, flexible and especially fast first point of entry to information, rather than newspapers or books. Additionally, the internet is easily accessible by using PDAs or wireless networks (Kvavik and Caruso, 2005).

In recent years, Austrian higher education institutions have increasingly reacted to this development, introducing social media and web 2.0 applications as a means of informing and communicating with their students. Web-based student feedback tools are becoming a popular element of these new approaches. Two examples are given below.

■ E-feedback boxes (e.g. at WU, Vienna University of Economics and Business): such electronic boxes serve as a platform which students can use – anonymously or not – to post their statements, wishes and ideas, and take part in the continuing improvement process of learning and teaching. If the students wish to receive an answer, they can leave their name and email address. In matters of general interest, the topics raised can be published on an easily accessible teaching platform.

■ Online student evaluations of teaching: the implementation of online student questionnaires for teaching evaluations is one of the most noticeable trends across Austrian tertiary institutions. Online evaluations are expected to be less costly (as no printed questionnaires are required), demonstrate a modern image of the university, and are comparatively fast to process, so evaluation results can be promptly reported to students and teaching staff (Tinsner and Dresel, 2007), which supports timely feedback discussion in class. In addition, Donovan et al. (2006) found that students who were giving their feedback online wrote more and longer comments than their colleagues using printed forms, and the comments even included specific reasons for the students’ judgment.

Yet the challenges of web-based evaluations should not be underestimated, chief among them low response rates. Surveys show that many higher education institutions use incentives to increase students’ participation and response rates in online evaluations, ranging from systematic reminder emails to making registration for new courses conditional on completion of the online evaluations (Bennett and Nair, 2011). The FH Wien University of Applied Sciences, for instance, provides freely accessible IT facilities for each course on the day of the final class, thereby attempting to motivate students to evaluate the course as soon as it has ended.

So far, there is no evidence in the Austrian context to suggest that the results of online evaluations are discussed in the classroom more often than those of traditional paper and pencil evaluations (because the results can be processed much more quickly), or even less often (because the feedback is no longer given during class time). There is, however, a real danger of the online format replacing face-to-face interactions, particularly if not only the standardised evaluation questionnaires but also the fast feedback forms described in trend 1 are organised in this way. Learning and teaching platforms are already offering ‘feedback buttons’ and similar solutions, which might lower the personal threshold for actually giving feedback but also make it more difficult for teacher and student to engage in a dialogue. Our final conclusions will thus address the significance of student feedback and give a critical appraisal of the trends and practices currently observable in Austrian higher education.

Relevance of student feedback in Austrian social sciences

To sum up the previous observations, the current trends and changes in feedback mechanisms within Austrian higher education occur on different levels:

■ on the instrumental level, there are diversified approaches and tools and a clear tendency to utilise new media for feedback purposes

■ on the conceptual level, the shift from a teacher-centred paradigm of higher education to a learner-oriented paradigm is mirrored in an increased focus on learning outcomes and students’ learning experiences

■ on the functional level, student evaluations of teaching are shifting from standalone instruments of quality assurance to becoming integral parts of institutional quality management systems.

Taking these observations one step further, however, reveals some further changes that occur at a more latent, deeper level. It is too early to state definite trends, but there are already some observable tendencies which affect the core of feedback ideas and purposes.

First, feedback instruments are increasingly used for purposes other than improvement. As part of the legally based transformation of the last decade, Austrian higher education institutions can now autonomously decide upon the instruments that contribute to effective and efficient management and leadership. While allowing significant latitude, the enacted law explicitly outlines the way in which evaluations in all Austrian education sectors should be implemented (Kohler, 2007). This is also how Austrian higher education institutions translate the systematic approach into the quality assurance discourse (Kohler, 2009). Nevertheless, such evaluation results – without any formative or improving character – are often utilised as monitoring instruments. In addition, due to legal changes, the pressure to legitimise the allocation of resources and staff is increasing. Universities evaluate teaching in order to legitimise increasing costs, so accountability, rather than improvement, is in the spotlight. It is debatable whether student evaluations of teaching have ever been ‘true’ feedback instruments, considering their origins in the context of the Austrian student revolution. Yet the tendency to use the results as performance monitoring data in order to feed managerial decisions is undoubtedly a product of the quality management discourse of the past six years.

Another reason for the growing importance of evaluations – again with accountability in mind – is the increasing national and international competition among post-secondary education institutions. Most of the recent trends in the uses of student feedback and evaluations can be attributed, directly or indirectly, to the Bologna Process and to manifold related changes, such as the massification and diversification of higher education, the challenges of resourcing higher education and the increased demand for ‘accountability’ (cf. Vettori et al., 2007; Hodson and Thomas, 2003; Schnell and Kopp, 2000). The related focus on higher education rankings also significantly affects the choice and implementation of quality measures.

Along with socio-political and legal requirements, the role of students in the process of feedback and evaluation has also been considerably redefined. Students have moved from being university members to institutional stakeholders or even ‘customers’, whose participation is more and more reduced to providing evaluation data or other documentation (cf. Vettori and Lueger, 2011). The definition of student feedback as ‘the expressed opinions of students about the service they receive’ (cf. Harvey, 2001) has therefore gained importance, and, from the teachers’ perspective, student feedback is rapidly losing its dialogue component.

Finally, with the implementation of the instruments receiving more attention than the question of suitable follow-up or, even more importantly, the issue of actually developing students’ feedback competencies, student feedback is becoming more and more formalised. This not only reframes feedback as a burden instead of an opportunity, but could even lead to a situation in which the participants who could gain the most from a well-developed feedback culture are the ones who are most disappointed. This is, admittedly, a rather bleak picture, and one that at present is far from being realised. The question of what can actually be learned from feedback, however, is clearly as important as ever.

Although most of these discussions, findings and arguments have been presented in the context of the social sciences, it is the authors’ belief that the major trends identified arise independently of discipline and field of study, and are therefore valid across a much wider range of disciplines within the Austrian context.

References

Bennett, L., Nair, C. S. Web-based or Paper-based Surveys: a Quandary? In: Nair C. S., Mertova P., eds. Student Feedback – The Cornerstone to an Effective Quality Assurance System in Higher Education. Cambridge: Woodhead Publishing; 2011:119–131.

Carless, D. Differing Perceptions in the Feedback Process. Studies in Higher Education. 2006; 31(2):219–233.

Deming, W. E. Quality, Productivity and Competitive Position. Cambridge, MA: MIT Center for Advanced Engineering Study; 1982.

Donovan, J., Mader, C. E., Shinsky, J. Constructive Student Feedback: Online vs. Traditional Course Evaluations. Journal of Interactive Online Learning. 2006; 5(3):283–296.

Dorfer, A., Maier, B., Salmhofer, G., Paechter, M. Bologna Prozess und kompetenzorientierte Lehrveranstaltungsevaluierung: GEKo – Grazer Evaluationsmodell des Kompetenzerwerbs. In: Pohlenz P., Oppermann A., eds. Lehre und Studium professionell evaluieren: Wie viel Wissenschaft braucht die Evaluation. Bielefeld: Universitätsverlag Webler; 2010:167–178.

El-Hage, N. Evaluation of Higher Education in Germany. Quality in Higher Education. 1997; 3(3):225–233.

Hanft, A., Kohler, A. Qualitätssicherung im österreichischen Hochschulsystem. Zeitschrift für Hochschulrecht. 2007; 6:83–93.

Harvey, L. Student Feedback: a Report to the Higher Education Funding Council for England. Birmingham: Centre for Research into Quality, University of Central England in Birmingham, 2001.

Hodson, P., Thomas, H. Quality Assurance in Higher Education: Fit for the New Millennium or Simply Year 2000 Compliant? Higher Education. 2003; 45(3):375–387.

Hyatt, D. Yes, A Very Good Point! A Critical Genre Analysis of a Corpus of Feedback Commentaries on Master of Education Assignments. Teaching in Higher Education. 2005; 10(3):339–353.

Kernegger, B., Campbell, D. F. J., Frank, A., Gramelhofer-Hanschitz, A. A. TELOS – Teaching Evaluation, Learning Outcome Sustained: an Individual Way of Course Evaluation, Designed for the University of Applied Arts Vienna. Paper presented at the EQAF 2009, 19–21 November, Copenhagen, Denmark. http://www.uni-ak.ac.at/stq, 2009.

Kohler, A. Quality Assurance in Austrian Higher Education – Features and Challenges. ENQA, Workshop Report 8, Current trends in European Quality Assurance, 2007.

Kohler, A. Evaluation im österreichischen Hochschulsystem. In: Widmer T., Beywl W., Fabian C., eds. Evaluation – Ein systematisches Handbuch. Wiesbaden: VS Verlag für Sozialwissenschaften; 2009:177–192.

Kvavik, R. B., Caruso, J. Key Findings: Students and Information Technology: Convenience, Connection, Control, and Learning. ECAR Key Findings; 2005.

Loukkola, T., Zhang, T. Examining Quality Culture: Part I – Quality Assurance Processes in Higher Education Institutions. EUA Publications. www.eua.be/pubs/Examining_Quality_Culture_Part_1.pdf, 2010.

Moerth, A. P., Hey, B. Geschlecht und Didaktik. Graz: Koordinationsstelle für Geschlechterstudien, Frauenforschung und Frauenförderung der Karl-Franzens-Universität; Grazer Universitätsverlag; 2006.

Oblinger, D. G., Oblinger, J. L. Educating the Net Generation. EDUCAUSE. www.educause.edu/educatingthenetgen/, 2005.

OECD. Education at a Glance 2010: OECD Indicators. OECD Publishing; 2010.

Paechter, M., Maier, B., Dorfer, A., Salmhofer, G., Sindler, A. Kompetenzen als Qualitätskriterien für universitäre Lehre: Das Grazer Evaluationsmodell des Kompetenzerwerbs (GEKo). In: Kluge A., Schüler K., eds. Qualitätssicherung und -entwicklung an Hochschulen: Methoden und Ergebnisse. Lengerich: Pabst; 2007:83–93.

Pechar, H., Pellert, A. Austrian Universities Under Pressure From Bologna. European Journal of Education. 2004; 39(3):317–330.

Preißer, R. Verwirklichungsbedingungen der Evaluation der Lehre und der Verbesserung der Lehre: Konsequenzen aus den bisherigen Erfahrungen mit Lehrveranstaltungskritik. In: Grühn D., Gattwinkel H., eds. Evaluation von Lehrveranstaltungen. Berlin: Zentrale Universitäts-Druckerei; 1992:197–217.

Schmidt, B., Loßnitzer, T. Lehrveranstaltungsevaluation: State of the Art, ein Definitionsvorschlag und Entwicklungslinien. Zeitschrift für Evaluation. 2010; 9(1):49–72.

Schnell, R., Kopp, J. Theoretische und methodische Diskussionen der Lehrevaluationsforschung und deren praktische Bedeutung. Forschungsbericht des geförderten Forschungsprojektes ‘Fakultätsinterne Evaluation der Lehre – die Weiterentwicklung des bisherigen Evaluationskonzepts’. Universität Konstanz. http://kops.ub.uni-konstanz.de/bitstream/handle/urn:nbn:de:bsz:352-opus-6054/evaluationsprojekt_schlussbericht.pdf?sequence=1, 2000.

Spiel, C., Gössler, M. Zwischen Selbstzweck und Qualitätsmanagement – Quo vadis, evaluatione? In: Spiel C., ed. Evaluation universitärer Lehre – zwischen Qualitätsmanagement und Selbstzweck. Münster: Waxmann Verlag; 2001:9–20.

Stifter, E. M. Qualitätssicherung und Rechenschaftslegung an Universitäten – Evaluierung universitärer Leistungen aus rechts- und sozialwissenschaftlicher Sicht. In: Brünner C., Mantl W., Welan M., eds. Studien zu Politik und Verwaltung. Wien: Böhlau, 2002.

Sursock, A. Examining Quality Culture Part II: Processes and Tools – Participation, Ownership and Bureaucracy. EUA Publications. www.eua.be/pubs/Examining_Quality_Culture_Part_II.pdf, 2011.

Tinsner, K., Dresel, M. Onlinebefragung in der Lehrveranstaltungsevaluation: Ein faires, verzerrungsfreies und ökonomisches Verfahren? In: Kluge A., Schüler K., eds. Qualitätssicherung und -entwicklung in der Hochschule: Methoden und Ergebnisse. Lengerich: Pabst; 2007:193–204.

Vettori, O., Lueger, M., Knassmueller, M. Dealing with Ambivalences – Strategic Options for Nurturing a Quality Culture in Learning and Teaching. In: European University Association, ed. Embedding Quality Culture in Higher Education: A Selection of Papers from the 1st European Forum for Quality Assurance. Brussels: EUA; 2007:21–27.

Vettori, O., Lueger, M. No Short Cuts in Quality Assurance – Theses from a Sense-making Perspective. In: European University Association, ed. Building Bridges: Making Sense of Quality Assurance in European, National and Institutional Contexts. Brussels: EUA; 2011:50–55.

Westerheijden, D. F., Hulpiau, V., Waeytens, K. From Design and Implementation to Impact of Quality Assurance: an Overview of Some Studies into What Impacts Improvement. Tertiary Education and Management. 2007; 13(4):295–312.


1According to Statistik Austria, 350 247 students were enrolled at Austrian higher education institutions in the academic year 2010/11, of which 327 950 were at public universities, FHs and private universities; http://www.statistik.at/web_de/statistiken/bildung_und_kultur/formales_bildungswesen/universitaeten_studium/index.html [accessed 16 October 2011].

2According to uni:data the number of students enrolled at universities of applied sciences in Austria was 37 564 for the academic year 2010/11; http://eportal.bmbwk.gv.at/portal/page?_pageid=93,499528&_dad=portal&_schema=PORTAL&E1aufgeklappt=4 [accessed 2 October 2011].

3According to Statistik Austria, the number of students enrolled at private universities was 6301 for the academic year 2010/11 http://www.statistik.at/web_de/statistiken/bildung_und_kultur/formales_bildungswesen/universitaeten_studium/index.html [accessed 16 October 2011].

4The early stages of Austrian student feedback developed analogously to those in Germany (El-Hage, 1997).

5The Bologna Process is a legally non-binding declaration shared by institutions across 46 European countries. For further details see: http://ec.europa.eu/education/higher-education/docl290_en.htm [accessed 05 October 2011].

6§ 14, section 1 states that the universities are to develop their own quality management systems in order to assure quality and the attainment of their performance objectives. The specific design of such a quality system, the concrete choice of quality management (QM) instruments and procedures, the definition of the competences of the internal QA units and the decision as to which processes are located at which organisational level was and still is basically left to the universities (Hanft and Kohler, 2007, p. 84).

7This trend is based on observations at conventions in Europe and has recently been discussed at higher education conferences, such as: the Online Educa Berlin 2011 conference and exhibition, 1–2 December 2011, http://www.online-educa.com/; the sixth EQAF, 17–19 November 2011, Antwerp, Belgium, http://www.eua.be/eqaf-antwerp.aspx; and the fifth EQAF, 18–20 November 2010, Lyon, France, http://www.eua.be/EQAF-Lyon.aspx.

8For detailed information see: http://ec.europa.eu/education/policies/educ/bologna/bologna.pdf.

9http://www.statistik.at/web_de/statistiken/informationsgesellschaft/ikt-einsatz_in_haushalten/index.html [accessed 17 October 2011].
