9

Emerging trends and approaches in the student voice in the social sciences

Chenicheri Sid Nair and Patricie Mertova

Abstract:

This chapter draws on the chapters concerning student feedback in the social sciences by contributors from around the world. It summarises the key trends, issues and approaches concerning student feedback within the social science disciplines.

Key words

student feedback

social sciences

international perspectives

trends

Introduction

Harvey (2003) has argued that students are important stakeholders in learning and teaching and also other processes within higher education and that consideration of their views is crucial to the quality of functioning of higher education. In line with this, universities have approached the process of better understanding and meeting the needs of their students through student evaluations. Student feedback in general serves a number of purposes. Bennett and Nair (2010) contended that these include: diagnostic feedback that will help in the development and improvement of teaching; research that will further improvements in curriculum; data for decision-making purposes for managers and supervisors; being a source of information for current and potential students to make informed choices concerning the institution and programmes; and finally, in numerous cases, judging quality of performance which in many countries is being tied to external funding formulae.

As student feedback has taken centre stage in the quality processes of higher education institutions, this book is timely in that it looks at the phenomenon from the perspective of social sciences. The chapters in this book are drawn from a number of countries, some in the developing phases of student feedback and others that have well-defined processes. This chapter looks at the contributions made in this book and attempts to draw on the messages that are emerging. What transpires from reading the chapters is that student voice has clearly been defined as a critical factor in measuring and maintaining the quality of higher education institutions and their programmes. What is also particular to this book is that the perspectives are from within social science disciplines.

Key trends, issues and approaches

With student voice being prominent within higher education institutions worldwide, there are some broad commonalities across the various chapters. These commonalities concern eight key factors that dominate the student voice in the social science disciplines. We – the authors of this chapter – perceive these factors as the emerging trends, issues and approaches.

Factor 1 – Developmental vs summative purposes

First and foremost, a shift in student evaluations from a developmental tool towards a source of teaching and institutional evidence has emerged in a number of instances within the book. This shift suggests that institutions are more concerned with obtaining good feedback results than with addressing issues by providing opportunities for those involved to improve. What comes out clearly is an apparent lack of discussion on how results are used and how they can help practitioners improve their classroom teaching. The case studies from South Africa, Hong Kong and Japan suggest that evaluations conducted in this way are perceived as tools of surveillance for managing those not fulfilling their roles in their institutions. This perception is not confined to developing countries or countries where the concept of student feedback is new. Research suggests that student feedback has tended to move towards a more summative purpose and is considered by academics as more of a management tool than a tool to help the individual teacher improve his or her teaching (Conley and Glasman, 2008; Marshall, 2005). For example, in Japan (Chapter 7), feedback activities tend to increase competition among academics and play down the need for dialogue.

Factor 2 – Defining the need, purpose and use of feedback

The need for institutional policies that clearly define the reason for such evaluations, their purpose and use is another central theme expressed in a number of chapters. Although in many developed countries the notions of use and purpose have been defined, this is not the case for countries where student evaluations are relatively new. In Chapter 7, the Japanese experience clearly shows that although there is recognition by teachers that students have a voice, there is a need for transformational change in management to recognise that evaluations are not only summative in nature but should have a critical formative component in their use.

Factor 3 – Timing and reporting of feedback

The timing and reporting of evaluation results seem to be an issue faced in a number of the higher education systems covered in this book. The lag between receiving student feedback and releasing final reports is an area that has been highlighted in previous research (e.g. Ballantyne, 1999; McKeachie, 1994). The arguments revolve around the usefulness of information gathered once the class is over, the fact that the current and possibly even the following cohort would not benefit from the changes, and the difficulty of accommodating frequently differing needs. This ties into the effectiveness of student evaluation, as students may feel that their feedback is not taken into consideration (Powney and Hall, 1998). This factor is outlined succinctly in Chapter 1 on South Africa, where there is recognition that such feedback is, in a sense, a mutual investment.

Factor 4 – Tools of the trade

For student feedback to be effective, it is essential to have a range of tools for evaluating different purposes. Teacher and unit evaluations are well defined in all the chapters. Other diagnostic feedback tools presented include: the student experience questionnaire (which covers the total experience of students in the institution, including the curriculum as well as support services); the graduate capabilities questionnaire; the alumni questionnaire; and the course experience questionnaire. Some chapters outline the complete set of tools utilised within particular higher education systems, such as Chapter 2 (Australia), Chapter 3 (Austria), Chapter 5 (Hong Kong) and Chapter 6 (Singapore).

Factor 5 – Closing the loop

Genuine engagement with student feedback is evidenced by the changes that are subsequently made by individual teachers or the entire institution. In other words, there is a need to engage with the reports that result from such feedback and to put in place plans of action for improvement. This concept is well summarised in a paper by Graduate Careers Australia (1999: 20):

It is a myth that all you have to do is to send back the results of a survey to those concerned and action, improvement and innovation will automatically occur. Such an assumption ignores all research on motivation and change management in universities.

Chapter 6, on the Hong Kong experience, clearly shows that only well-structured plans can achieve the best outcomes to benefit the students.

Factor 6 – Training and development

With the gathering of student feedback comes the issue of ‘deciphering’ the data before any action can be taken by the academic, department, school, faculty or institution. The Austrian and South African case studies suggest that there is a need to develop an effective approach to training staff to interpret the results. One emerging suggestion is that institutions should utilise those trained in the area of pedagogy to help formulate resources and support an easy transition from data to change outcomes.

Factor 7 – Move towards electronic feedback

The trend towards electronic or web-based feedback is apparent in student surveys around the world. However, paper-based surveys remain relevant depending on cultural needs, course design, or the stage of development of the student feedback process. An example of such a transition towards online evaluations is illustrated in the Austrian case study (Chapter 3). The positives and negatives of online student feedback surveys have also been reported by Bennett and Nair (2010).

Factor 8 – Qualitative and quantitative tools

A noticeable trend in many of the chapters is the feedback tool design. There is clear recognition that both the quantitative and qualitative components play a critical part in achieving a total picture of student perceptions and needs. Chapter 2 (Australian context) shows how student feedback can be analysed and used for effective change. Chapter 4 (UK context) also illustrates how feedback data have been used to interpret and institute change.

Concluding remarks

Although many of the factors discussed in this book appear in social science contexts, a review of the research literature has shown no difference in student feedback developments and actions in other disciplines (e.g. Nair and Mertova, 2011). What is more prominent in the social science contexts is a greater recognition of qualitative comments in gaining a deeper understanding of the student experience, which has been documented throughout the book.

This book adds to the debate taking place within disciplines over whether there are significant differences in student voice and whether it should be used to generalise about the student experience. Although this debate will continue, especially if one looks at specifics such as teacher evaluations and item structure, the international contributions in this book suggest that the issues, trends and approaches faced in understanding and improving the student voice remain almost identical across systems, and this is reflected in the authors’ two earlier publications (Nair and Mertova, 2011; Nair et al., 2012).

References

Ballantyne, C. Showing Students You’re Listening: Changes to the Student Survey System at Murdoch. In: Martin K., Stanley N., Davison N., eds. Teaching in the Disciplines/Learning in Context. Proceedings of the 8th Annual Teaching Learning Forum, The University of Western Australia, February 1999. Perth: UWA; 1999. http://lsn.curtin.edu.au/tlf/tlf1999/ballantyne.html [accessed June 2012].

Bennett, L., Nair, C. S. A Recipe for Effective Participation Rates for Web-based Surveys. Assessment and Evaluation Journal. 2010; 35(4):357–366.

Conley, S., Glasman, N. S. Fear, the School Organization and Teacher Evaluation. Educational Policy. 2008; 22(1):63–85.

Graduate Careers Australia. Institutional Arrangements for Student Feedback. Melbourne: Graduate Careers Council of Australia; 1999.

Harvey, L. Student Feedback. Quality in Higher Education. 2003; 9(1):3–20.

McKeachie, W. J. Teaching Tips (9th edn). Lexington, MA: D.C. Heath and Company; 1994.

Marshall, K. It’s Time to Rethink Teacher Supervision and Evaluation. Phi Delta Kappan. 2005; 86(10):727.

Nair C. S., Mertova P., eds. Student Feedback: the Cornerstone to an Effective Quality Assurance System in Higher Education. Cambridge: Woodhead Publishing, 2011.

Nair C. S., Patil A., Mertova P., eds. Enhancing Learning and Teaching through Student Feedback in Engineering. Cambridge: Woodhead Publishing, 2012.

Powney, J., Hall, S. Closing the Loop: the Impact of Student Feedback on Students’ Subsequent Learning. Glasgow: The SCRE Centre, University of Glasgow; 1998.
