Chapter 2
School Self-Evaluation

Patterns of emerging practice

James, M. (1987) ‘Self-initiated Self-evaluation’. In P. Clift, D. Nuttall and G. Turner (eds) Studies in School Self-evaluation (Basingstoke: Falmer Press): 172–189.

Introduction

This chapter gives an account of a review of self-evaluation activities initiated by teachers within their own schools, as distinct from school self-evaluations initiated by LEAs. The review was carried out at the Open University in 1981 and was first reported in a research monograph (James, 1982).

The idea for such a review came from an interest in discovering whether schools were, in fact, conducting self-evaluations on their own initiative. Whilst many educationists were advocating this form of self-evaluation as the least threatening, the most fair, and the most effective in terms of change, there was little evidence of what was happening in the field. Certainly there was no overview.

One possible explanation for this lack of information is very simple. Unlike LEA schemes that frequently culminate in the production of a public report, school-initiated self-evaluations can serve purely internal purposes. Thus, there may be no immediate need for either the process or its outcomes to be disseminated beyond the bounds of the school community. Also, many schools are sufficiently small or cohesive for the predominant mode of communication to be oral and there may seem to be no need for any written account.

With this in mind, the purposes of the review were conceived as twofold:

to bring to light any self-evaluation exercises that schools and teachers were conducting on their own initiative;

to examine these activities for shared characteristics and issues and consider their possible significance.

In June 1981 an advertisement was placed in the Times Educational Supplement and letters were published in The Teacher and ILEA Contact. The Classroom Action Research Network (CARN) was also used to contact teachers, schools and colleges (activities in H.E. and F.E. were included in the original review). The request for information was phrased: ‘Have you or your school or college undertaken self-evaluation, self-assessment, self-monitoring or curriculum review; or do you know those who have? If so we would like to hear from you.’

Replies were subsequently received from about 200 individuals or institutions. Since they were self-selected they did not constitute a random or representative sample from which to establish typicality. They did, however, provide sufficient evidence to permit a description of the range and variety of recent experience, and the formulation of hypotheses about the organisation and practice of self-initiated self-evaluation.

Drawing the boundaries

The initial request for information did not impose any strict definitions of self-evaluation, self-assessment, self-monitoring or curriculum review, therefore individuals and organisations were able to interpret the inquiry according to their own meanings and understandings and send information that they considered appropriate. This enabled us to gain some purchase, phenomenologically, on the way in which these activities were defined by others. Teasing out interpretations forced us to reconsider what the bounds of the review should be, and what kind of information was most relevant. For instance, an interesting question was raised by those teachers who sent information about student self-assessment schemes. We had not anticipated this interpretation of our request because we had broadly conceived school self-evaluation as concerned with wider issues of curriculum and organization. It is perhaps significant, therefore, that some of the most interesting data we received about a school’s curriculum were embedded in an example of a student’s self-assessment report. It is interesting to speculate whether recent proposals for student Records of Achievement (DES, 1984) will become another vehicle for school self-evaluation.

A further problem of boundary definition arose when we came to consider the quantity of material that we had been sent by support agents: advisers, teachers’ centres and other INSET organizers. Although interesting, much of this was of a general nature and rarely described particular exercises in enough detail to be useful to us in this context. In the end we decided to concentrate on just 54 accounts of activities which, for the purposes of this chapter, I will call ‘self-initiated self-evaluations’. Each of these accounts described a particular initiative and the information came directly from schools, colleges and teachers who were, or had been, actively engaged in the processes they described.

Categories, dimensions and criteria

The methodology used to arrive at a comparative description of the 54 self-evaluations was essentially a content analysis of documentary evidence. Most schools and teachers had provided some written material, whether in the form of school documents, letters or completed proformas. Notes had also been taken of a number of visits and telephone conversations. Although we confined our analysis to a relatively limited number of exercises, the task took two of us several weeks to complete. Every teacher’s account arose out of a specific context and was unlike any other. There were no short cuts. We had to read all the material (a pile of A4 fifteen inches high!), generate a number of categories and dimensions, then read almost every word again and refine our analysis. At first the task was daunting and promised to be dull, but, when we accepted the degree of involvement it required, it became fascinating. In many cases we had, we felt, been given sufficient information to enter the culture of the school – at least partially. Even routine school documents convey a tone and style which speak of the ethos of particular institutions. A powerful sense of the uniqueness of individual schools and classrooms remains perhaps our chief personal insight.

To help those teachers who requested further guidance on the kind of information we sought, we devised a proforma with a number of headings. These anticipated some of the categories we were eventually to use in our analysis but a number became virtually redundant and fresh ones were generated. For instance, our early preoccupation with research-type methodology gave way to a consideration of meetings, conferences, personal reflection and critical dialogue as a mode of evaluation.

Our data also generated questions concerning the covert source of initiatives and influence. It seemed less satisfactory to describe all 54 activities as school- or teacher-initiated; there were often outside influences that needed to be acknowledged. However, we have to admit that the way in which we finally analyzed the various exercises still left much to be desired. Our analysis was inevitably reductionist, and incapable of representing the nuances of meaning and relationship that we began to perceive when reading the material.

Clearly, the generation of categories and the classification of each activity depended largely on our subjective judgment. The only indisputable dimension for analysis was the designation of the activity according to educational sector, i.e. primary, middle, secondary, further, higher or special. Even here comparison was made difficult because the age range covered by a particular sector often varied according to local authority arrangements. Nevertheless, we ventured to identify a number of categories and dimensions which we organised in the following way:

Categories

Levels

In the first instance, it seemed prudent to classify activities according to the levels at which they principally operated, i.e. classroom, department or whole institution. This gave us our major categories. We defined these categories according to the following criteria:

Institutional self-evaluation. This referred to whole school self-evaluation, although it did not necessarily imply that all staff and all areas of the curriculum were evaluated; rather, the activity was visible as a school activity. Activities that were initially department- or classroom-based, or concerned an issue that was ‘institution-wide’, were included if they formed part of a whole school programme.

Teacher self-evaluation. This included both the evaluation by teachers of their own classroom performance, and the evaluation of classroom interaction and learning processes and outcomes. The crucial feature was that the exercise was conducted principally by the teacher who had responsibility for the classroom on which the evaluation focused.

There was occasionally some overlap between categories. For instance we had evidence of whole school curriculum review and an individual teacher self-evaluation taking place simultaneously in the same school.

Sectors

Within each of the categories given above, it was possible to group together activities taking place in the same educational sector. Thus all activities were classified according to both sector and level, for example, secondary departmental self-evaluation, primary institutional self-evaluation, secondary teacher self-evaluation.

Dimensions

Each activity was also classified according to a number of dimensions that emerged in the process of analysis. The following are the headings we used and the criteria for deciding how self-evaluation could be described in terms of each dimension.

Initiatives

This heading was used to identify a ‘prime-mover’ or initial stimulus. The influence of this agent may not have been very great; an LEA checklist or the comment of an adviser might have stimulated a school to initiate an activity of this nature without imposing an obligation to do so. Thus the school may have made the initiative its own, i.e. in no way felt it was responding under duress. On some occasions a number of different individuals or groups contributed to establishing an initiative.

Three subgroups of initiators seemed to be identifiable:

the schools and/or teachers themselves;

LEAs;

others (for example, academics, teachers’ centres, Schools Council).

Involvement

This dimension related to those who were actually involved in the conduct of the evaluation:

senior teachers (senior and middle management);

teachers (all levels). This implies the active involvement of ordinary classroom teachers in the conduct of the evaluation (not just as informants);

LEA (for example, officers and advisers);

others (for example, academics, teachers’ centre wardens, governors, parents, pupils).

This classification appears to distinguish ‘insiders’ from ‘outsiders’. In relation to particular evaluation exercises this was not necessarily the case, since evaluation at departmental or classroom level may involve others from within the institution (for example, from another department) in the role of an ‘outsider’.

Purposes

We were able to identify (to a greater or lesser extent) three kinds of purposes that self-evaluation served:

accountability. This could refer to the rendering of account either to those outside the schools, for example, parents, LEA; or within the schools, for example, teacher to HoD, HoD to Head – and vice versa;

professional development, including INSET activities;

curriculum and course review and development.

Inevitably these strands are interrelated; indeed some commentators would argue that some purposes are subsumed by others. However, our data suggested that schools stress certain purposes rather than others, so we thought it was worth trying to identify those emphases that emerged most strongly.

Organization

This dimension was the most difficult to label appropriately. Many of the terms we experimented with seemed value-laden, if not pejorative (depending on your view). The two we finally selected are not precise descriptions but they indicate an important distinction in the way these self-evaluation exercises were organized.

Rational management: In this form of organization self-evaluation was linked to the management structure of the school or department. For example, classroom teachers usually prepared reports for their HoDs, who then prepared reports for senior management. Information tended to flow upwards through a series of policymaking levels or committees, although not exclusively. Sometimes procedures were sensitively devised to give protection to the least powerful and to allow some information to flow downwards.

Collegial: Exercises in this category were not obviously linked to the management structure of the schools. There was usually a deliberate effort to put management roles to one side and encourage full and equal participation by all those involved in the evaluation. This is not to deny that certain individuals took leadership roles. In some cases, it was quite clear that certain teachers – often our informants – had been more prominent than others. Nevertheless, there was an effort to ensure that the activity was conducted on a collegial basis and sometimes specific procedures were developed to protect the least powerful, or to facilitate information flow in all directions.

These definitions tend towards ‘ideal types’. As with all ‘ideal types’, they were rarely represented in reality in any pure form. All we could do was to indicate what we considered the predominant organizational characteristics of self-evaluation.

Focus

This referred to the kinds of substantive areas on which various evaluation exercises concentrated. The dimension could be sub-divided in various ways but the following seemed appropriate:

Antecedents: Referring to Stake’s (1967) conceptual framework of data which might be collected by an evaluator, we used this heading to indicate a focus in the evaluation on preconditions rather than the actual processes of teaching and learning. The term encompasses both antecedent intentions (for example, aims, objectives, goals) and antecedent conditions, such as the nature of student intakes (for example, VRQs, socio-economic backgrounds), resources (for example, teacher qualifications and experience, curriculum materials) and the kinds of procedures that were assumed to be prerequisites of satisfactory educational transactions (for example, management and communication procedures). Curriculum review, for example, in terms of the HMI ‘eight areas of experience’ (DES, 1978), might be of this kind.

Processes: This description was confined to those processes that involved students and contributed directly to their educational experience (for example, student-student interaction, teacher-student interaction and the analysis of ‘on task’ activity). Processes could include transactions which took place outside the classroom (for example, those related to pastoral care), as long as they involved students directly.

Outcomes: Normally, this term refers to pupil learning outcomes (i.e. products) whether quantitatively or qualitatively described. Outcomes, in this sense, were not purely defined as relating to the cognitive domain (for example, acquisition of knowledge, intellectual skills or understanding); they could also be affective (for example, feelings, attitudes and values) or psycho-motor (for example, bodily coordination).

Methods

Two distinct approaches (which could be further subdivided) seemed to emerge from the data:

Meeting-based: This included evaluations conducted through staff meetings, working parties, conferences, courses, and in the case of the individual teacher working alone, through personal reflection.

Research-based: This described those exercises where there had been some formal and fairly systematic effort to collect and analyze ‘data’. Two particular research approaches were identified although many exercises could best be described as eclectic.

  1. Quantitative – data might include test results, results of public examinations and the use of interaction schedules.
  2. Qualitative – data here might include diaries, interviews, audio and video recordings.

These two research approaches seem to be associated with positivist and interpretative theoretical perspectives, respectively. However, it is unlikely that all schools and teachers could have articulated the theoretical rationale for the approach they adopted, and the eclectic mode suggests a lack of commitment to one, to the exclusion of the other. In other words, whereas we have evidence of methodology, we can only infer theoretical perspective.

Reports

After some consideration we rejected a written versus oral report dimension in favour of a description of audience. Thus audiences were simply described as:

internal, for example, self, colleagues within the institution;

external, for example, parents, governors, other colleagues or professionals outside the institution.

Comment

In retrospect it now appears rather odd that we did not identify a dimension of ‘action’ describing what happened as a result of exercises in evaluation – after all improvement of practice was an explicit or implicit aim of most, if not all. We can only attribute this omission to the fact that the data we gathered were mostly concerned with evaluation procedures, and rarely gave us details of changes in practice that took place subsequent to reporting. We should, of course, emphasize that most of the activities reported to us were still at the planning stage, in their infancy, or as yet incomplete. The irony of this omission must have struck teachers when we returned our accounts of their exercises for clearance, because a number included some description of subsequent action in their replies.

Discussion

Our first task was to describe each self-evaluation in terms of the categories and dimensions we had identified. After returning the 54 analyses to schools and colleges, we were given clearance for 52. Our next task was to search our data for recurring patterns, themes or issues. The first, and conceivably most important, thing we noticed was that no activity was exactly like any other. This had been our first subjective impression, but it was reinforced by the fact that our codings of individual activities presented us with 52 different permutations. Throughout the following discussion this point needs to be borne in mind. Towards the end we present a typology of institutional self-evaluations, although it is not our intention to diminish the essential uniqueness of each activity.

Distribution

First, a word about the distribution of our self-selected sample. By far the greatest number of activities reported to us had taken place in secondary schools (n = 31). A number of factors may account for this. The first is connected with the way we collected our data. Advertisements were placed in the educational press towards the end of the summer term, 1981. Schools and teachers with documents relating to their activities could send them to us without too much inconvenience, but those who needed to write up their work may well have felt that they could ill-afford the time at this busy period of the year. The bias in our sample may therefore be attributable to the fact that secondary schools, by virtue of their size, generally use the written mode of communication more frequently than smaller and more cohesive primary schools. On the other hand, it could be that the requirements of the 1980 Education Act, particularly those relating to the publication of examination results and curricular arrangements, were putting relatively greater pressure on secondary schools to evaluate their work – at least at an institutional level. It certainly seemed significant, though not in a statistical sense, that the preponderance of secondary sector activities reported to us were at the level of the whole school (n = 21). As with the primary sector, responses from the tertiary sector were relatively few (n = 8).

Geographical distribution of activities was similarly uneven, with a distinct clustering around London, the Midlands and the western portion of East Anglia, and, to a lesser degree, in the south and south-west. Many of the activities in these areas had strong connections with higher education or INSET providers (for example, the universities of Aston, Birmingham, Bristol, East Anglia, London, Leicester, Loughborough, Southampton, Sussex, Warwick, The Open University and the Cambridge Institute of Education). Here again the pattern may have been influenced by the way we collected data, because some of our contacts were made through our use of networks (CARN, for instance). Nevertheless, it is interesting to speculate whether schools with university INSET links are more inclined to respond to an inquiry from another university, or whether they genuinely lead the field in this kind of behaviour.

Initiatives

The link with H.E. and INSET providers is important for another reason. The assumption on which our research was based was that activities reported to us were ‘self’ (i.e. teacher or school) initiated. Our original inquiry stated: ‘we would like to document more fully the range of activities at classroom, department or whole school level arising from internally defined professional needs.’ The data we received suggested that initiatives were less clear cut than we had imagined. Some activities appeared to respond to initiatives elsewhere (for example, an LEA scheme in another authority or an HMI discussion paper); some were attempts to pre-empt the imposition of an external scheme by establishing an in-school initiative first; others were the response of individuals to the combined demands of self-evaluation and higher degree requirements. While it is probably true that all the activities recorded in the register were genuine attempts by schools and teachers to meet their own needs, there were indications that the exercises they conducted were also designed to respond to external pressures. Thus the suggestion of an LEA adviser, higher degree supervisor, or an INSET coordinator may have encouraged the development of what was still, basically, an internal initiative.

Organizational types

The strength of an approach to research that sets out to derive theory from practice, rather than to test a number of prespecified hypotheses, is that it increases the possibility that some things will be discovered that were not anticipated by the researcher at the outset. This was certainly true in our case. At the beginning of our investigations we were most interested in questions such as: What is being evaluated? How? For what purpose? As our work progressed we became increasingly aware of the importance of organizational structures. Our impression was that the answers to our earlier questions were in some way dependent on an answer to a question concerning the management of evaluation.

As pointed out above, we eventually identified two distinct organizational strategies: one we called ‘rational management’, the other we called ‘collegial’. One or two activities were in a stage of transition between the two organizational styles, but most had either a rational management style of organization or they were organized collegially. In other words, this dimension presented us with fairly clear-cut alternative organizational styles. Examining our data, we discovered that of our 52 examples we had identified a strong organizational style in 35. Omitting two teacher self-evaluations which were very much special cases, we noted that we had designated 16 institutional and departmental activities, representing all three educational sectors, as having a rational management form of organization. Seventeen others we had described as ‘collegial’. Hypothesizing that these two organizational styles might represent two types of evaluation, we next examined our data to see if there was a relationship between the organizational dimension and the others we had identified. Table 2.1 summarizes our analysis.

What emerged was interesting. From our small sample, it seemed that a rational management style of organization was more likely to be associated with evaluation conducted chiefly by senior staff (heads of department and above), primarily for purposes of accountability or curriculum review, focusing particularly on antecedent conditions (for example, aims, objectives, management structures), and using meetings and discussions as its main vehicle (usually discussions between heads of department and headteachers, or heads of department and their departmental staff).

Collegially organized evaluations, on the other hand, were more likely to be conducted by staff at all status levels, primarily for purposes of professional or curriculum development. Antecedents, processes and outcomes all became foci but relatively greater emphasis was given to educational processes. Meetings and discussions again featured prominently but qualitative research was also an important method.

As far as outside influences were concerned, it was interesting, and perhaps significant, that the universities of Birmingham, Warwick and Aston and the North West Educational Management Centre were associated with some activities of the rational management kind; while the Cambridge Institute of Education, the Centre for Applied Research in Education at the University of East Anglia, the London Institute of Education, the Universities of Bristol and Exeter and the College of St. Mark and St. John, Plymouth, were associated with collegially organized evaluations. Could this reflect the emergence of two distinct traditions of self-initiated self-evaluation? We suspect it does.

The two organizational forms of evaluation did however have something in common, apart from the importance they both attached to meetings and discussion. Consistent with the perception that initiatives were primarily stimulated by internal needs, was the identification of insiders as constituting the principal audience for any report. This suggests that any explicit accountability purpose should be interpreted as accountability to colleagues rather than accountability to parents, employers or political masters. Even so, the apparently strong relationship between evaluation for in-school accountability and a rational management style of organization suggests that accountability to colleagues involves, as Ebbutt (1981) has observed in a similar context, ‘the justification and explanations teachers owe to their superiors for how they earn their money and spend their time. As such, accountability is structurally hierarchical and reveals a bureaucratic relational, ultimately legalistic, aspect’.

The fact that there was little explicit mention of accountability in the context of collegially organized evaluation, does not necessarily imply that it was altogether absent. It has been argued elsewhere (Sockett, 1980; Open University 1982a) that if teachers are committed to the evaluation and improvement of their practice then they are being professionally accountable, albeit in an implicit way. In this case, however, the imperative is moral rather than legal-formal. It is, therefore, possible to argue that whether or not it had been consciously noted, an element of accountability (or responsibility) was present in all our examples. The difference is that the moral/implicit mode of accountability more usually characterizes evaluation organized on a collegial basis, whereas the legal-formal/explicit mode of accountability often characterizes evaluation of the rational management kind.

The East Sussex Accountability Project (1980) attempted a conceptual analysis of accountability and identified three facets: answerability to clients (moral accountability); responsibility to oneself and one’s colleagues (professional accountability); and accountability in the strict sense to one’s employers or political masters (contractual accountability). Elliott (1980), reporting on the work of the Cambridge Accountability Project, took issue with this analysis and asked, ‘why can’t teachers feel answerable to each other and responsible towards parents?’, and ‘if these two attributes can’t be confined to different audiences, can we attribute strict accountability solely to contractual relations with employers?’ (p. 89). On the basis of our evidence I am inclined to raise the same questions as Elliott. The prominence of internal audiences for reports of self-evaluations would place most of our examples in the category of professional accountability procedures, according to the East Sussex classification. Thus answerability and strict accountability would be excluded. However, our evidence suggests that there is a strong element of strict accountability in some in-school activities; in others there is an implicit element of answerability or moral accountability to self and colleagues, as well as to parents and pupils. In my judgment, therefore, the East Sussex classification oversimplifies what is happening at school level because it links modes of accountability (moral, professional, strict) too tightly to particular audiences (clients, colleagues, employers). Moreover, it is surely not the case that moral, professional and strict accountability are necessarily mutually exclusive. I would argue, for instance, that professional accountability can possess both moral and legal-formal (strict) aspects. Thus, while it may be legitimate to describe all forms of accountability to colleagues as professional accountability, at least two subvariants need to be acknowledged, i.e. professional accountability of a moral kind, and professional accountability of a legal-formal kind. Certainly some such distinction would appear to be embedded in our examples.

The way we analyzed our data suggested that organizational style may be a key variable in school-initiated self-evaluations. This is at a practical or operational level. On the basis of this distinction I have already gone beyond the data to propose two types of accountability implicit or explicit in our examples. I want to continue this more theoretical discussion by proposing also that our two organizational types represent two contrasting sociopolitical conceptions of evaluation: one built on what sociologists call a positivistic or systems perspective, the other grounded on an interpretative or humanistic perspective.

In a paper on the evaluation of school science departments, Brown, McIntyre and Impey (1979) draw a distinction between authority-based and responsibility-based evaluations:

We wish to present two contrasting sociopolitical conceptions of evaluation. One of these is based on the idea of authority, while the other is based on the idea of responsibility. According to the former conception, those in a position of authority make decisions about what ought to happen, communicate their prescriptions to the other people concerned, and subsequently evaluate the practices of those others by assessing the extent to which they conform to the predetermined ideal pattern; the criteria for the evaluation are based here on prescriptions for other people’s activities. According to the responsibility-based conception of evaluation, any individuals or groups, irrespective of their position, decide what state of affairs they want to bring about and plan how to achieve this state of affairs, attempt to implement their plans, and then evaluate the outcomes of their actions in terms of the extent to which their goals have been attained; the criteria for the evaluation are based here on the plans for which one is oneself responsible.

(Brown, McIntyre and Impey, 1979, p. 183)

This distinction seems to encompass both the operational/organizational distinctions proposed earlier. However, whether the examples we were given were authority-based or responsibility-based is a matter of some speculation; we could not deduce this from our data with any degree of certainty. In other words, to say that collegially organized self-evaluations were responsibility-based, and rational-management evaluations were authority-based, is a matter of inference – it is not self-evident. Moreover, we can only usefully employ the authority-based/responsibility-based distinction if we acknowledge a further caveat. Brown, McIntyre and Impey seem to assume that the character of an initiative and the style of its operation are the same. This need not necessarily be so. For instance, a headteacher may have initiated a self-evaluation but then left his or her staff to conduct the activity as they saw fit. This is certainly true of one activity at Stantonbury Campus (Open University, 1982b). In such cases the authority/responsibility distinction is far from clear-cut; an observation which serves to remind us that the data of real examples rarely conform to the neatness of theory.

Bearing these important qualifications in mind, the authority/responsibility distinction is still useful and I would like to develop it further by proposing a tentative typology of insider evaluations at institutional or departmental level. The two forms proposed are not exactly ‘ideal types’ because they do not represent opposite poles in terms of their dimensions. Moreover, since they have been generated from our data, the relationships between various dimensions and constructs are empirical rather than logical.

Throughout this discussion I have emphasized that although our data went some way towards supporting this typology we had no evidence of activities which represented our ‘types’ in pure form. Most of our examples were some kind of amalgam of the two. This is not surprising since, by their nature, typologies and models cannot take account of particular circumstances, needs, pressures or contexts. However, insofar as our 52 evaluations ‘approximated’ to one type rather than another, and there seemed to be sufficient evidence for suggesting that they did, these two formulations were a useful way to begin to make connections.

Teacher self-evaluation

All that has been said about organizational style is, of course, almost totally irrelevant to teacher self-evaluations. Unless teachers involve their professional colleagues as ‘outsiders’ in relation to their own activities, management of others does not feature strongly. Add to this the fact that we received only eight descriptions of individual exercises, and we can say little about this group with any confidence. Once more the small number of responses we received may be attributable to the way in which we collected our data. Our request invited written accounts, but teachers, who are their own audience when self-evaluating, may feel little compulsion to commit their work to paper. Thus writing a report especially for us imposed an additional task.

Nevertheless, even with so small a sample, we were able to note that all eight teachers were interested in using evaluation as a means to improving their professional practice. Only one had any explicit accountability purpose in mind, and this only marginally. What was also interesting was that all but one had an external audience in addition to themselves. For five of the eight this included those who would examine the higher degrees that they were pursuing at the time. One wonders what incentive there is for engaging in a formalized activity, considering the amount of time and energy it requires, unless there exists some extrinsic stimulus. Such a comment may appear cynical but the number of times higher degree or diploma work was mentioned (in connection with departmental and institutional evaluations as well) suggests a career incentive that cannot lightly be dismissed.

References

Brown, S., McIntyre, D., and Impey, R. (1979) ‘The evaluation of school science departments’, Studies in Educational Evaluation, 5, pp. 175–86.

Cambridge Accountability Project (1981) Case Studies in School Accountability (3 volumes). Cambridge Institute of Education.

DES (1978) Curriculum 11–16: Working Papers by HM Inspectorate. HMSO.

DES (1984) Records of Achievement: A Statement of Policy. HMSO.

East Sussex Accountability Project (1980) Accountability in the Middle Years of Schooling: An Analysis of Policy Options. Brighton: University of Sussex (mimeo).

Ebbutt, D. (1981) ‘Springdale’, in Cambridge Accountability Project, Case Studies in School Accountability Volume 3. Cambridge Institute of Education.

Elliott, J. (1980) SSRC Cambridge Accountability Project: A Summary Report. Cambridge Institute of Education (mimeo).

James, M. (1982) A First Review and Register of School and College Initiated Self-Evaluation Activities in the United Kingdom. Milton Keynes: Educational Evaluation and Accountability Research Group, Open University (mimeo).

Open University (1982a) Course E364 Curriculum Evaluation and Assessment in Educational Institutions: Block 1: Accountability and Evaluation. Open University Press.

Open University (1982b) Course E364 Curriculum Evaluation and Assessment in Educational Institutions: Case Study 2: Stantonbury Campus. Open University Press.

Sockett, H. (1980) ‘Accountability: The contemporary issues’. In H. Sockett (ed.) Accountability in the English Educational System. Hodder and Stoughton.

Stake, R. (1967) ‘The countenance of educational evaluation’, Teachers College Record, 68, pp. 523–40.
