2

Student feedback in the US and global contexts

Fernando F. Padró

Abstract:

Today’s accountability processes for university performance require the use of student feedback for an increasing number of aspects of what universities do. This has to be done in a systematic way and made part of an institution’s decision-making process. The challenge to the use of student feedback is the purpose behind the collection and analysis of this information. External stakeholders prefer the use of a customer-focused model in which what universities do is defined as a service. Academic staff and supporters of academe prefer an approach that recognises the traditional norms of academic performance. This chapter discusses the controversies, dilemmas and issues of using student feedback from the lens of evaluating the performance of academic units and staff in the USA, where the practice of generating student ratings is a long-standing one and where many of the concerns regarding the use of student feedback were initially raised.

Key words

institutional quality

meta-professional model

performance data

SERVQUAL

student feedback

student ratings

university accountability

Introduction

St John, Kline and Asker (2001) wrote about the need to rethink the links between the funding of public higher education with accountability measures based on student-choice processes and student outcomes. Student outcomes are now considered the principal mechanism for determining successful performance of higher education institutions (HEIs). No longer are external reviews merely interested in the traditional input data; the focus is for input and throughput mechanisms to enhance and maximise student learning opportunities within the campus for traditional as well as the increasing number of non-traditional students. Description of programmes is not enough – universities have to know more about their product (Cronbach, 2000).

Student-choice processes place a greater emphasis on student-based information to identify needs in order to figure out how to better serve those needs. Student feedback is desirable because it helps determine student satisfaction with their interaction with the different elements of the university and measure the extent of active engagement by the students in terms of curricular and co-curricular programmes. The hope is that institutions will: (a) retain their students until graduation, (b) generate student learning that is attractive to potential employers or graduate programmes, and (c) enhance student loyalty as alumni to provide the university with funds, participative support, and feedback that allows the institution to provide a value-added experience while on campus and afterwards.

Traditional student information included in the application package (grades, entrance exams, personal information) provides only information specific to acceptance of students to HEIs and subsequent placement into residence halls, support programmes and/or courses (remedial, honours or advanced level). Grades alone are also insufficient to fully gauge student contentment with what they are getting out of their university experience. Thus, there is a need to know more about students and their satisfaction with their experience. From 1966 onward, the Cooperative Institutional Research Programme (CIRP), a national longitudinal study of the higher education system in the USA, has been regarded as a key source of comprehensive data on incoming and continuing students. An additional layer of student data has become available from 2000 onward through the National Survey of Student Engagement (NSSE), which annually documents on a national basis student participation in programmes and activities HEIs make available for learning and personal development. According to Kuh (2001), NSSE allows for the creation of national benchmarks of good practice for universities to use to measure their improvement efforts relative to academic concerns along with supportive campus environments.

The point of student feedback is to systematically use what had been collected informally for many years, such as information from alumni (cf. Braskamp and Ory, 1994), and formalise it so as to make it a meaningful component of institutional decision-making. This systematisation of data collection and the focus on incoming, current and former students (hopefully graduates) is shaped by the current reality of the nascent environment that Slaughter and Leslie (1997) called academic capitalism, an environment in which academic and professional staff have to navigate in more highly competitive situations driven by market-like behaviours.

Types of student feedback can be broken down into distinct categories (Table 2.1). Taken together, the model that emerges is a feedback process that bears a striking resemblance to Brocato and Potocki’s (1996) customer-based definition of the student and of the quality of instruction, in contrast to more traditional notions of academic endeavour and quality:

Table 2.1

Different types of student feedback collected by HEIs


• The student’s education is the product.

• The customers for this product are the students, families, employers and other academic staff.

• The student definition of quality is that education which meets student expectations.

Quality focuses on two process-related questions: ‘What is wanted?’ and ‘How do we do it?’ (Straker, 2001). The inclusion and growing significance of student feedback is an example of how quality management and assessment are more widely accepted today (Kitagawa, 2003), even if there are ‘fundamental differences of view of the appropriate relationship that should be established between higher education institutions and their external evaluators’ (European Association for Quality Assurance in Higher Education, 2005: 11). This is why the Malcolm Baldrige National Quality Award Education Criterion 3.1(b) 2 asks educational institutions how they:

… build and manage relationships with students and STAKEHOLDERS to

• acquire new students and STAKEHOLDERS;

• meet their requirements and exceed their expectations at each stage of their relationship with you; and

• increase their ENGAGEMENT with the [institution]?

(Baldrige National Quality Program, 2009: 13)

And although voluntary regional accrediting bodies in the USA have not fully adopted the Baldrige criteria as the blueprint for external reviews, one can see a more detailed tactic in the New England Association of Schools and Colleges (NEASC) (2006) standard 4.50:

The institution uses a variety of quantitative and qualitative methods to understand the experiences and learning outcomes of its students. Inquiry may focus on a variety of perspectives, including understanding the process of learning, being able to describe student experiences and learning outcomes in normative terms, and gaining feedback from alumni, employers, and others situated to help in the description and assessment of student learning. The institution devotes appropriate attention to ensuring that its methods of understanding student learning are trustworthy and provide information useful in the continuing improvement of programs and services for students. (p. 12)

Student feedback in the evaluation of academic programmes and instruction

Student evaluation of courses or units and of instruction has been used at HEIs for many years. As far back as 1949, Guthrie asserted that teaching is best judged by students as well as by colleagues. In the USA, by 1994, 98 per cent of respondents indicated that systematic student evaluation of classroom teaching was occurring at their campuses, with the other 2 per cent indicating that their institutions were considering it (Glassick et al., 1997). In Australia, the Course Experience Questionnaire was fully implemented in 1993 and continues to be used to this day for the purpose of allowing comparison of programmes among Australia’s universities. Meanwhile, especially under the re-engineering of higher education occurring in Europe under the Bologna Process, most European universities are instituting some sort of student evaluation of teaching – with the emphasis seeming to be on student satisfaction (cf. Wiers-Jenssen et al., 2002).

The remainder of this chapter is dedicated to discussing relevant issues and concerns regarding student feedback within the framework of evaluating classroom instruction. To say that this is a controversial topic is an understatement. This is why Aleamoni’s (1999) literature review discusses and then rebuts many of the myths surrounding the use of student feedback as a means of evaluating instruction and instructor performance. What will come through is the challenge of framing the process and use of student feedback in relation to the job performance of instructors. A continuum seems to be developing. One end is represented by Arreola et al.’s (2003) meta-professional model for faculty, while the other end is represented by the service quality instrument SERVQUAL, first developed by Parasuraman et al. (1985, 1988), which others have since made applicable to higher education. The rationale behind SERVQUAL is to measure consumers’ perceptions of quality when there is an absence of objective measures. This approach, as can be seen, easily aligns with many of the concepts driving the need for student feedback.

Case study

Seashoal Lows University (SLU) is a medium to large, urban, comprehensive teaching-focused university of about 8,000 undergraduate and graduate students majoring in the arts and sciences and in professional programmes in business, counseling psychology, education, and human and health services (a Master’s Large Comprehensive institution under the current Carnegie Classification). It has been using student evaluations of its instructors for a number of years. However, the faculty senate, the faculty collective bargaining unit and individual faculty have been complaining about the appropriateness of how these are used and the purpose behind them. Student evaluations of faculty (SEFs) are externally created instruments administered either on paper or online near the end of the academic term. Problematic to the faculty and its related university organisations is that these instruments have become the primary element in deciding faculty promotion and tenure. According to the Agreement faculty have with SLU, the review process should be based on a portfolio provided by the instructor going up for review that includes publications, conference presentations, external and internal funding awards (when applicable), course syllabi and materials, classroom observations by peers selected by the applicant and the administration, observations by the supervising administrator, student evaluation of faculty results, documentation of university and community service, and external reviews of the portfolio by individuals agreed to by the faculty member and the head of the academic unit. Decisions are made by a committee within the School, with recommendations given to the Dean who, in turn, makes a recommendation to the Provost and President.

There are two principal types of faculty: faculty who have asked to be considered for their research as well as instruction; and those who want to be primarily considered for promotion and tenure based on their instruction (with criteria ostensibly following the suggestions put forth by Ernest Boyer (1990) regarding the scholarship of teaching). Programme-level accreditation at times provides guidelines for promotion and tenure for certain programmes while the university’s overall criteria are purposefully kept nebulous to avoid potential litigation from those denied promotion and/or tenure.

The issue is that, in practice, the main piece of evidence regarding instruction has become the student evaluation of the applicant. It has become apparent that committees and responsible administrators weigh student evaluations disproportionately when compared to course syllabi and materials or classroom observation by peers and administrators. Faculty complaints range from the inappropriateness of the instrument, because it is not linked or validated to institutional norms of good teaching, to the belief that student feedback really represents a popularity contest because students may not be the best judges of content. Some are also concerned that the timing of the evaluations ties student observations to their idea of what their grade should or will be. Moreover, there are complaints that there is no real instructional support for faculty because there is no formal capacity to assist faculty – especially junior faculty – in improving their instruction.

SLU’s administration does not want to change the student evaluation process because they are concerned that the time and cost taken to create and validate an in-house instrument may adversely impact accreditation-related data analysis and reporting. They also like the idea that they can use the external instrument to compare instructor ratings with those of other institutions. Finally, the administrators believe that the external instrument is validated and, as a result, provides an accurate evaluation of faculty – more so than the other factors, which can be influenced by personalities and politics.

SLU realistically does not use the data collected for benchmark analysis, although it can. It does not use the data collected for continuous improvement purposes, or even to check for problems with instruction. However, the University’s administration is mindful of the change in the external regulatory environment that identifies the student as a consumer, and it wants to maintain and enhance responsiveness to student needs and expectations. Therefore, there is an impasse within the institution. One side discounts the merit of student feedback and at best is resigned to having to live with it. The other side sees student feedback as a way of meeting its regulatory compliance requirements and finds that the ability to quantify instruction provides a more compelling basis for career decisions.

Issues

There are seven issues the scenario brings up:

• the role of student evaluation of instruction in staffing decisions (continuation, promotion, tenure) as distinguished from programme/unit performance;

• the weight given to student feedback in staffing decisions;

• the appropriateness of the instrument used to evaluate courses within a programme/unit and instruction;

• how the data are used for analysis purposes (institutional and individual performance);

• the decision-making processes surrounding the creation, content, validation, administration and use of the instrument;

• the resources available to support improvement activities as a follow-up to data results;

• the need for a careful and clear link between HEI staffing decision-making processes and the role of student feedback (including contractual agreements between the university and collective bargaining unit).

These have to be considered as a whole rather than as separate components because they are all interrelated. Regardless of the approach taken, whether from a more traditional professional development model or a customer service model, there are legal implications to consider along with fitness of purpose, morale and buy-in concerns from all university employees and from students themselves. Employees are concerned about job security and fairness. At play are the strategies individuals use as part of their sense-making process to determine what they need to succeed at the HEI (cf. Weick, 1995). Apart from these aspects, it is important not to forget that students pay attention to the extent to which their feedback is actually considered and followed up on, which in turn affects their future engagement.

Probably the most important consideration is to identify and clearly articulate the reasons for and uses of student feedback on instruction. For example, the American Council on Education and the American Association of University Professors (2000) jointly advise that the stated criteria for tenure should be the criteria applied in actual practice. The extent to which these instruments are used for hiring/firing and promotion purposes may differ throughout the world. In many countries, feedback instruments are strictly utilised to measure the institutional performance of academic units. Nevertheless, if the data reflect weakness at the individual level, individuals can be identified to receive additional support. Anecdotally, some HEIs identify the bottom 10 per cent of performers and provide them with training to improve their instruction.

The evaluation of academic staff performance reflects the role they play at their particular campus. Traditionally, the roles are divided into the triad of research, instruction and service, often in this order of importance. The roles of academic staff at a particular institution ought to be clearly defined and understood. It helps to know the criteria or standards for continuity and promotion. Review mechanisms and feedback have to be linked to these criteria or standards. Techniques and instruments need to be reliable and show practical and/or statistical validity. Techniques and instruments have to be weighted as part of the codification of value process relative to specific performance (Arreola, 2007). Techniques and instruments have the challenge of needing to overcome, as much as possible and feasible, Birnbaum’s (1988) characteristics of an anarchic university (unclear goals, imprecise technology, fluid participation, and solutions looking for problems). And, in addition, there should be an institutional support element at the campus-wide or unit level such as a centre of teaching and learning (cf. Padró, 2010; Institutional Management in Higher Education, 2007) to ensure improvement either in remedial or enhancement modes.

Two questions emerge about the role and use of student feedback in the area of instruction, the one area of the triad where students are in a position to provide useful observations. The first question is: What is the role of student evaluations of instruction? Is it formative – focusing on diagnostics and continuous improvement – or summative in scope, leading toward staffing decisions? Pallett’s (2006) observations provide an answer:

Regrettably, the diagnostic value of student ratings is often not realized, even though many faculty developers and teaching improvement specialists attest to their potential. There are at least three reasons for this. First, there is so much emphasis on the summative component of student ratings … that what can be learned to improve teaching is often overlooked … Second, useful, valid and reliable student ratings forms are difficult to create. This is especially true in developing a form that can truly support improvement efforts. Third, for real gains in teaching skills to occur, support and mentoring needs to be provided. While those making personnel assessments should not be precluded from guiding improvement efforts, others not involved in the evaluation process also need to be available. (pp. 51–52)

The second question is: What is the best type of instrument for students to use in giving feedback about courses and instruction? There are three approaches. The first one is for an HEI to develop its own instrument. This requires careful construction based on the institutional values of teaching, followed by appropriate reliability and validity studies that can take a few years to complete. Arreola (2007) described a ten-phase process for creating these forms:

Phase 1. Determine the issues to be measured by the form.

Phase 2. Write or select the items.

Phase 3. Develop appropriate response scales.

Phase 4. Conduct field trials to gather the data needed for subsequent validity and reliability determination.

Phase 5. Conduct a factor analytic study.

Phase 6. Develop subscales based on the result of the factor analysis.

Phase 7. Refine the form.

Phase 8. Establish norms.

Phase 9. Organize the items.

Phase 10. Implement the student rating system. (pp. 114–116)
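Phases 4 to 6 above are where the psychometric work happens. As an illustration only – not drawn from Arreola’s text – the following minimal Python sketch shows the kind of analysis implied: field-trial ratings are checked for internal consistency (Cronbach’s alpha) overall and within provisional subscales. The data, item counts and subscale groupings are invented for the example.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal consistency for a respondents x items matrix of ratings."""
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    k = item_scores.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulated field trial (Phase 4): 200 students rating 8 items on a 1-5 scale,
# all items driven partly by one underlying 'teaching quality' factor.
latent = rng.normal(size=(200, 1))
ratings = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(200, 8))), 1, 5)

# Hypothetical subscales (Phase 6) -- in practice these would come from the
# factor analytic study in Phase 5, not be hard-coded.
subscales = {"organisation": [0, 1, 2, 3], "rapport": [4, 5, 6, 7]}

print("overall alpha:", round(cronbach_alpha(ratings), 2))
for name, items in subscales.items():
    print(f"{name} alpha:", round(cronbach_alpha(ratings[:, items]), 2))
```

The hard-coded grouping simply stands in for the result of the factor analysis; the point is that reliability evidence of this kind is what separates a defensible form from an untested one.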

Braskamp and Ory (1994) classify common forms of assessment of academic staff into three categories:

• Omnibus form: fixed set of items, administered to students or participants in all classes, workshops, etc. given by the HEI;

• Goal-based form: students rate their own performance or progress on stated course goals and objectives; currently, learner outcomes;

• Cafeteria system: bank of items from which academic staff can select those considered most relevant for assessing one’s own course.

This typology suggests HEIs carefully consider the options available under this action plan. Regardless of type, without paying attention to the reliability and validity part of the process, questions arise regarding how good the data are and their usefulness. In addition, if staffing considerations are proved to be significantly impacted by this instrument, then there may be tortuous litigation ahead.

The second approach to utilising student feedback instruments is to use commercially available or ‘off-the-shelf’ forms created and sold by a private firm. This is the track represented in the case study.

Arreola (2007) does indicate that it may be a good idea to consider adopting or adapting a professionally-developed form because many locally-generated student feedback forms may not possess the necessary psychometric qualities of reliability and validity. Professionally-developed instruments have already been tested for reliability and validity. In addition, the use of such instruments allows for national comparisons between comparable institutions using that particular survey. Comparative data can be used for improvement purposes, for personnel decisions, for marketing/rating information that students and parents can use in choosing which university to attend, and for documenting the performance of programmes and units as part of benchmarking exercises. A key aspect of deciding whether or which instrument to select is to determine which one provides the university with data that are meaningful to it, based on how the data will be utilised.

Critics of using these commercial instruments have numerous concerns, ranging from the appropriateness of the concept of student feedback itself, to the alignment of items/questions with institutional normative references of instruction, to how data are interpreted and subsequently used. Some of these concerns echo what detractors think about home-grown forms. Ramsden’s (1991) research, drawing on the literature review by Marsh (1987) and others, makes the case for an association between student learning and student perception of teaching. Aleamoni (1999) examines these characteristic arguments against the use of student feedback in the light of studies that have tested them. What Aleamoni found was that most of the contentions do not hold based on available findings. For example, he countered the complaint that ‘Students cannot make consistent judgments about the instructor because of their immaturity, lack of experience, and capriciousness’ with evidence dating as far back as 1924 indicating just the opposite: ‘The stability of student ratings from one year to the next resulted in substantial correlations in the range of 0.87 to 0.89’ (p. 153). Aleamoni also countered another typical complaint, that ‘Student rating forms are both unreliable and invalid’, by arguing that ‘[W]ell-developed instruments and procedures for their administration can yield high internal consistency reliabilities … in the 0.90 range’, while the vast majority of studies regarding instrument validity indicated the ‘existence of moderate to high positive correlations’ (p. 155).

The third approach toward which student feedback instrument(s) to use may not be a choice at all, because institutions are required to use or adapt a national instrument mandated by national protocols or a national quality assurance process (cf. Institutional Management in Higher Education, 2007). The example in mind is the Course Experience Questionnaire utilised in Australia, an approach whose popularity is expanding to other countries, reflecting Ewell’s (2002) observation that assessment and accountability have become entwined. Linking accountability with the assessment of learning means, at one level, a change in the conceptual paradigm of assessment from an evaluative stance to one of assuming active and collective responsibility for learning while, at another level, it suggests the evolution of a learning organisation. The basis for this thinking is the use of a national instrument as part of a quality teaching framework in which students play a formal role, as is evolving in Europe (ENQA, 2005) and elsewhere in countries establishing their own national quality assurance systems. According to the OECD:

Students can collaborate with teachers and leaders in the definition of the initiative (and of the quality teaching concept itself), keeping the interaction alive and raising concerns about teaching, learning environments, quality of content and teacher attitudes. They can best contribute if invited to serve on governing bodies or used as evaluation experts on par with academic reviewers. (Institutional Management in Higher Education, 2007: 75)

In reviewing the literature and practices surrounding the use of student feedback for quality assurance purposes in the area of instructional performance of individuals, units and whole HEIs, what becomes apparent is that a continuum is developing. One extreme emphasises professional development based on carefully structured multiple processes with adequate support mechanisms that help individuals succeed in their roles within the triad of research, instruction and service. The other extreme is a strict customer service model based on documenting performance in instruction for institutional success, utilising comparative analyses so that different stakeholders can use this information for informed decision-making (e.g. personal choice of institution, national policy determination of success and alignment to national intellectual/human capital needs). Below is a discussion of the models that exemplify the two extremes of the continuum.

The meta-profession model of faculty

‘Faculty performance is complex and dynamic’ (Braskamp and Ory, 1994: 22). Its evaluation should reflect the complexity of faculty work framed within clearly communicated institutional goals and expectations. The meta-profession model is selected as a highly systematised approach toward the evaluation of faculty that reflects these points. What this perspective represents is a review process that maintains the traditional focus of evaluating academic staff using the broader brushstrokes of competencies that cannot be defined only in terms of customer satisfaction.

Arreola et al. (2003) extended Boyer’s (1990) effort at making the professoriate and others rethink the role of academic staff at universities. Currently, academic staff ‘must perform at a professional level in a variety of roles that require expertise and skills in areas that often extend beyond the faculty member’s specific area of scholarly expertise’ (Arreola et al., 2003: 1). Arreola (2007) has taken this model and developed a comprehensive evaluation system in which student feedback on instruction is one of numerous approaches to collecting data. Student ratings are useful under the right conditions which, according to Aleamoni (1978, as cited in Arreola, 2007), is when they are used as part of a personal consultation between the instructor and a faculty development resource person.

The model is a multi-dimensional one, beginning with base professional skills that newly hired academic staff have. These include content expertise, techniques for keeping current in the field, practice and/or clinical skills appropriate to the field, and research skills and techniques appropriate to the field. However, these skills are insufficient when discussing instructional duties and other activities academic staff typically perform (institutional service, including potential administrative duties, and research aligned to institutional expectations). Arreola’s (2007) approach toward evaluating performance that includes all of these additional layers of expertise has eight steps:

1. Determine the faculty role model.

2. Determine the faculty role model parameter values – codify the priorities and values relative to the role faculty play.

3. Define the roles in the faculty role model – clearly define each role in terms of specific activities that allow for performance measurement.

4. Determine role component weights.

5. Determine appropriate sources of information.

6. Determine information source weights.

7. Determine how information should be gathered.

8. Complete the system – select/design/build the various tools necessary to gather the information needed to conduct the evaluation.
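To make steps 2, 4 and 6 concrete, the short sketch below shows how role weights and information-source weights might combine ratings on a common scale into a single composite score. All role names, weights and ratings are invented for illustration; Arreola (2007) describes a negotiated, institution-specific process for arriving at the actual values.

```python
# Hypothetical priorities for the faculty role model (step 2) and weights for
# each information source (step 6); ratings are on a 0-4 scale for the example.
role_weights = {"teaching": 0.50, "research": 0.35, "service": 0.15}

source_weights = {
    "teaching": {"students": 0.5, "peers": 0.3, "self": 0.2},
    "research": {"peers": 0.6, "head_of_unit": 0.4},
    "service":  {"head_of_unit": 1.0},
}
ratings = {
    "teaching": {"students": 3.4, "peers": 3.8, "self": 3.6},
    "research": {"peers": 3.1, "head_of_unit": 3.3},
    "service":  {"head_of_unit": 3.9},
}

# Weighted sum across sources within each role, then across roles (step 4).
composite = sum(
    role_weights[role]
    * sum(source_weights[role][src] * ratings[role][src] for src in source_weights[role])
    for role in role_weights
)
print(f"composite evaluation score: {composite:.2f} (0-4 scale)")
```

The point of the meta-professional approach is visible in the arithmetic: student feedback enters only as one weighted source within the instructional role, rather than standing in for the whole evaluation.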

SERVQUAL

For the purposes of this chapter, SERVQUAL represents a line of thinking in which student feedback is primarily for the purpose of determining the quality of educational programmes from a student or other external stakeholder perspective. SERVQUAL looks at ten variables to assess service quality fit: tangibles, reliability, responsiveness, communication, credibility, security, competence, courtesy, understanding/knowing the customer and access.

SERVQUAL is not without its detractors, owing to questions about its applicability to different service industries and a perceived lack of completeness in measuring certain aspects of service quality (Chiu and Lin, 2004; Sureshchandar et al., 2001). Others have methodological (Smith, 1995) and empirical concerns (Van Dyke et al., 1997). Thus, Carr (2007) proposes FAIRSERV (service evaluated through the lens of fairness) as an alternative. However, Bayraktaroglu and Atrek (2010) find that SERVQUAL, as well as a similar model, SERVPERF (service performance), can be used in measuring service quality in higher education services. And in their use of SERVQUAL, Emanuel and Adams (2006) found that the dimensions of reliability (the instructor’s ability to deliver the course dependably and accurately) and responsiveness are the most important dimensions of instructor service to students.

Using SERVQUAL assumes that universities are a service industry. For example, Schneider et al. (1994) conceptualised teaching as a service ‘in that (a) teaching processes and experiences are relatively intangible; (b) teaching is typically produced, delivered, and consumed simultaneously; and (c) teaching typically requires the presence of customers’ (p. 685). ‘SERVQUAL assumes that the difference between the customer’s expectations about a service and his or her perceptions of the service actually determines quality’ (Bayraktaroglu and Atrek, 2010: 47). According to Parasuraman et al. (1985), it is more difficult to understand and evaluate the impact of services because they are intangible, heterogeneous and inseparable. The model distinguishes between service quality and satisfaction: ‘[P]erceived service quality is a global judgment, or attitude, relating to the superiority of the service, whereas satisfaction is related to a specific transaction’ (Parasuraman et al., 1985: 16). What is at play is a ‘disconfirmation paradigm’ that suggests that, prior to an interaction, consumers form expectations about ensuing product/service experiences (Prugsamatz et al., 2007). Figure 2.1 below illustrates how the customer service model focuses on sources of complaints and how failure can be traced back to reduce similar negative incidents in the future.
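As an illustration of the disconfirmation logic just described – a sketch only, with invented scores – the gap between perception (P) and expectation (E) can be computed for each of the ten dimensions listed earlier; negative gaps mark where perceived service falls short of expectations.

```python
# Hypothetical 1-7 ratings; dimension names follow Parasuraman et al. (1985).
dimensions = ["tangibles", "reliability", "responsiveness", "communication",
              "credibility", "security", "competence", "courtesy",
              "understanding/knowing the customer", "access"]

expectations = {d: 6.0 for d in dimensions}   # what students expect of the service
perceptions = {d: 5.2 for d in dimensions}    # what they report experiencing
perceptions["responsiveness"] = 6.3           # one dimension exceeds expectations

# Gap score per dimension: quality is inferred from P - E.
gaps = {d: perceptions[d] - expectations[d] for d in dimensions}
for d, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    flag = "shortfall" if gap < 0 else "meets/exceeds expectation"
    print(f"{d:36s} gap = {gap:+.1f}  ({flag})")

overall = sum(gaps.values()) / len(gaps)
print(f"\nunweighted overall gap: {overall:+.2f}")
```

A fuller implementation would weight the dimensions by their importance to respondents; the unweighted average above is the simplest version of the idea.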

Figure 2.1 Root causes of customer failures (Source: Tax et al., 2006: 30)

Discussion

In 1974, Hartnett and Centra found that there tends to be a good deal of consensus among administrators, faculty (academic staff) and students about the academic environment at a university. The principal exception was in how students differed in their views about democratic governance, with less variation found regarding the institution’s concern for innovation and the overall extent of staff morale.

Glasser (1998), in his analysis of quality in education for the twenty-first century, argues that students are asked to perform tasks in order to be evaluated, creating reviewable work that, in turn, allows them to recognise quality in the classroom. ENQA (2005) guidelines for HEI quality assurance systems include, among other items, student satisfaction with programmes and teacher effectiveness.

The challenge is to ensure that these purposes are mutually compatible: improving student learning and enhancing professional development opportunities rather than feeding punitive personnel decision-making. Aleamoni (1999) warned of the potential for misinterpretation and misuse of student ratings by administrators in punitive personnel decisions. This warning could be extended to the use of the data for quality assurance purposes as well. Embedded within this challenge is the balance between academic freedom and the need for conformance. Burgan (2006) suggested that variety in instruction is informative to students and universities. Conversely, UNESCO defined student evaluation of teachers as the determination of conformance ‘between student expectations and the actual teaching approaches of teachers’ (Vlasceanu et al., 2007: 93). Programmes such as the Tuning Project in Europe and Latin America – whose purpose is to establish a methodology for designing/redesigning, developing, implementing and evaluating different study programmes (González and Wagenaar, 2008) – give rise to concerns about conformity leading to the disengagement of academic staff. This is because their interests (and those of the disciplines they represent) would become marginalised when compared to policy steering and satisfaction interests.

‘New research shows that outstanding performance is the product of years of deliberate practice and coaching; not of any innate talent or skill’ (Ericsson et al., 2007: 115). Contrary to a potential reading of the comments in this chapter, the models being put forward at the international level regarding the evaluation of instruction understand this and insist that adequate support for improving instruction be provided from a professional development and enhancement of student learning perspective. What the role of student feedback brings to the debate on institutional quality is the question of where the emphasis lies, particularly in regard to institutional autonomy when it comes to disciplinary interests, freedom of expression or inquiry, and pedagogical matters. Another way of looking at this is the potential that the sharing of conclusions happens because there is a preference to convene around failures rather than successes (cf. Darling et al., 2005).

This chapter is written primarily from an ‘American’ perspective, with observations based on how this perspective fares in relation to international practice. The author first became aware of the development of a continuum in the practice of evaluating instructors when listening to and reading about approaches taken by academic staff in Europe to establish their own student evaluation forms. Some of these individuals used the SERVQUAL model because of a lack of awareness of long-standing models and methods for evaluating instruction and student ratings.

Writers such as Wiers-Jenssen et al. (2002) document how academic staff and institutions in Europe have been looking for workable models of instruction evaluation. The continuum represents the divergence that exists based on the demands placed on institutions and staff. Student feedback is not the problem. What is pressing is the role student feedback plays in the form of student ratings and evaluation of faculty and how the data are going to be used. There are other issues that are not discussed here because they are important topics in their own right (e.g. electronic forms and the ability for students to see the impact of their feedback). What is the priority for academic staff? Is it an education that leads to jobs, or the development of critical thinking skills and aesthetic and ethical abilities that lead to needed social skills (e.g. Institutional Management in Higher Education, 2007), or is it research? Arreola and his colleagues have developed a conceptual model and a process for the evaluation of instruction that reflects the nuance of academic work. Is a customer service model sufficiently sophisticated, and does it adequately align student feedback to ensure that all aspects of academic work are properly documented and acted upon in a manner beneficial to academic staff, students, HEIs and nations? These are the questions which still remain to be answered.

References

Aleamoni, L.M. The usefulness of student evaluations in improving college teaching. Instructional Science. 1978; 7:95–105.

Aleamoni, L.M. Student rating myths versus research facts from 1924 to 1998. Journal of Personnel Evaluation in Education. 1999; 13(2):153–166.

American Council on Education, American Association of University Professors, United Educators Insurance Risk Retention Group. Good practice in tenure evaluation: Advice for tenured faculty, department chairs, and academic administrators. Washington, DC: American Council on Education; 2000.

Arreola, R.A. Developing a comprehensive faculty evaluation system: A guide to designing, building, and operating large-scale faculty evaluation systems, 3rd ed. San Francisco: Anker Publishing; 2007.

Arreola, R., Theall, M., Aleamoni, L.M., Beyond scholarship: Recognizing the multiple roles of the professoriate. Paper presented at the 2003 AERA Convention, April 21–25, Chicago, IL., 2003.

Baldrige National Quality Program. 2009–2010 Education Criteria for Performance Excellence. Gaithersburg, MD: Author; 2009. Available online at: http://www.baldrige.nist.gov/PDF_filesf2009_2010_Education_Criteria.pdf (accessed 5 January 2010).

Bayraktaroglu, G., Atrek, B. Testing the superiority and dimensionality of SERVQUAL vs. SERVPERF in higher education. Quality Management Journal. 2010; 17(1):47–59.

Birnbaum, R. How colleges work: The cybernetics of academic organization and leadership. San Francisco: Jossey-Bass; 1988.

Boyer, E.L. Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching; 1990.

Braskamp, L.A., Ory, J.C. Assessing faculty work: Enhancing individual and institutional performance. San Francisco: Jossey-Bass; 1994.

Brocato, R., Potocki, K. We care about students … one student at a time. Journal for Quality and Participation. 1996; 19(1):74–80.

Burgan, M. What ever happened to the faculty? Drift and decision in higher education. Baltimore, MD: Johns Hopkins University Press; 2006.

Carr, C.L. The FAIRSERV Model: consumer reactions to services based on a multidimensional evaluation of service fairness. Decision Sciences. 2007; 38(1):107–130.

Chiu, H.-C., Lin, N.-P. A service quality measurement derived from the theory of needs. The Service Industries Journal. 2004; 24(1):187–204.

Cronbach, L.J. Course improvement through evaluation. In: Stufflebeam D.L., Madaus G.F., Kellaghan T., eds. Evaluation models: Viewpoints on educational and human services evaluation. 2nd ed. Dordrecht: Kluwer Academic; 2000: 235–247.

Darling, M., Parry, C., Moore, J. Learning in the thick of it. Harvard Business Review. 2005; 83(7):84–92.

Emanuel, R., Adams, J.N. Assessing college student perceptions of instructor customer service via the Quality of Instructor Service to Students (QISS) Questionnaire. Assessment and Evaluation in Higher Education. 2006; 31(5):535–549.

Ericsson, K.A., Prietula, M.J., Cokely, E.T. The making of an expert. Harvard Business Review. 2007; 85(7/8):114–121.

European Association for Quality Assurance in Higher Education (ENQA). Standards and guidelines for quality assurance in the European Higher Education Area. Helsinki: ENQA; 2005. Available online at: http://www.bologna-bergen2005.no/Docs/00-Main_doc/050221_ENQA_report.pdf (accessed 28 February 2010).

Ewell, P.T. An emerging scholarship: a brief history of assessment. In: Banta T.W. and Associates, eds. Building a scholarship of assessment. San Francisco: Jossey-Bass; 2002: 3–25.

Glasser, W. The quality school: Managing students without coercion, Revised ed. New York: HarperPerennial; 1998.

Glassick, C.E., Huber, M.T., Maeroff, G.I. Scholarship assessed: Evaluation in the professoriate. San Francisco: Jossey-Bass; 1997.

González, J., Wagenaar, R. Universities’ contributions to the Bologna Process: An introduction. Bilbao: Universidad de Deusto; 2008. Available online at: http://tuning.unideusto.org/tuningeu/ (accessed 16 July 2009).

Guthrie, E.R. The evaluation of teaching. Educational Record. 1949; 30:109–115.

Hartnett, R.T., Centra, J.A. Faculty views of the academic environment: Situational vs. institutional perspectives. Sociology of Education. 1974; 47(1):159–169.

Institutional Management in Higher Education. Learning our lesson: Review of quality teaching in higher education. Paris: OECD; 2007.

Kitagawa, F. New mechanisms of incentives and accountability for higher education institutions: Linking the regional, national, and global dimensions. Higher Education Management and Policy. 2003; 15(2):99–116.

Kuh, G.D. Assessing what really matters to student learning: inside the National Survey of Student Engagement. Change. 2001; 33(3):10–17, 66.

Lehtinen, U., Lehtinen, J. Two approaches to service quality dimensions. Service Industries Journal. 1991; 11:287–303.

Marsh, H.W. Students’ evaluations of university teaching: research findings, methodological issues, and directions for future research. International Journal of Educational Research. 1987; 11:253–387.

New England Association of Schools and Colleges, Commission on Institutions of Higher Education. Standards for accreditation. Bedford, MA: Author; 2006. Available online at: http://cihe.neasc.org/downloads/Standards/Standards_for_Accreditation2006.pdf (accessed 29 January 2010).

Padró, F.F. University centers of teaching and learning: a new imperative. Quality and Participation in Higher Education Supplement of the Journal of Quality and Participation. 2010; 1(1):3–10.

Pallett, W. Uses and abuses of student ratings. In: Seldin P., Associates, eds. Evaluating faculty performance: A practical guide to assessing teaching, research, and service. Bolton, MA: Anker Publishing; 2006:50–65.

Parasuraman, A., Zeithaml, V.A., Berry, L.L. A conceptual model of service quality and its implications for future research. Journal of Marketing. 1985; 49(3):41–50.

Parasuraman, A., Zeithaml, V.A., Berry, L.L. SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing. 1988; 64(1):12–40.

Pate, W.S. Consumer satisfaction, determinants, and post-purchase actions in higher education. College and University Journal. 1993; 68:100–107.

Pereda, M., Airey, D., Bennett, M. Service quality in higher education: the experience of overseas students. Journal of Hospitality, Leisure, Sport and Tourism Education. 2007; 6(2):55–67.

Prugsamatz, S., Heaney, J.-G., Alpert, F. Measuring and investigating pretrial multi-expectations of service quality within the higher education context. Journal of Marketing for Higher Education. 2007; 17(1):17–47.

Ramsden, P. A performance indicator of teaching quality in higher education: The Course Experience Questionnaire. Studies in Higher Education. 1991; 16(2):129–150.

Schneider, B., Hanges, P.J., Goldstein, H.W., Braverman, E.P. Do customer service perceptions generalize? The case of student and chair ratings for faculty effectiveness. Journal of Applied Psychology. 1994; 79(5):685–690.

Slaughter, S., Leslie, L.L. Academic capitalism: Politics, policies, and the entrepreneurial university. Baltimore, MD: Johns Hopkins University Press; 1997.

Smith, A.M. Measuring service quality: is SERVQUAL now redundant? Journal of Marketing Management. 1995; 11:257–276.

St John, E.P., Kline, K.A., Asker, E.H. The call for public accountability: rethinking the linkages to student outcomes. In: Heller D.E., ed. States and Public Higher Education Policy: Affordability, Access, and Accountability. Baltimore, MD: Johns Hopkins University Press; 2001:219–242.

Straker, D. What is quality? Part 1. Qualityworld. 2001. Available online at: http://syque.com/quality_tools/articles/what_is_quality/what_is_quality_1.htm (accessed 15 February 2010).

Stufflebeam, D., Madaus, G., Kellaghan, T., eds. Evaluation models: Viewpoints on educational and human services evaluation, 2nd ed. Dordrecht: Kluwer Academic; 2000.

Sureshchandar, G.S., Rajendran, C., Kamalabanhan, T.J. Customer perceptions of service quality: a critique. Total Quality Management. 2001; 12(1):111–124.

Tax, S.S., Colgate, M., Bowen, D.E. How to prevent your customer from failing. Sloan Management Review. 2006; 47(3):30–38.

Van Dyke, T.P., Kappelman, L.A., Prybutok, V.R. Measuring information systems service quality: concerns on the use of the SERVQUAL questionnaire. MIS Quarterly. 1997; 21(2):195–208.

Vlasceanu, L., Grünberg, L., Parlea, D. Quality assurance and accreditation: A glossary of basic terms and definitions. Bucharest: UNESCO; 2007.

Weick, K.E. Sensemaking in organizations. Thousand Oaks, CA: Sage; 1995.

Wiers-Jenssen, J., Stensaker, B., Grøgaard, J.B. Student satisfaction: toward an empirical deconstruction of the concept. Quality in Higher Education. 2002; 8(2):183–195.
