Chapter 10
Assessment in Harmony with Our Understanding of Learning

James, M. (2012) ‘Chapter 12: Assessment in harmony with our understanding of learning: problems and possibilities’. In J. Gardner, (ed) Assessment and Learning. Second Edition (London: Sage): 187–205.

Discussion of classroom assessment practice, and of the implications for teachers’ professional learning, draws attention to the close relationship between assessment and pedagogy. Indeed, the central argument is that effective assessment for learning is integral to teaching and learning. This raises some theoretical questions about the ways in which assessment, on the one hand, and learning, on the other, are conceptualized and how they articulate. This chapter considers the relationship between assessment practice and the ways in which the processes and outcomes of learning are understood.

Starting from an assumption that there should be a degree of alignment between assessment and our understandings of learning, I have (James, 2006) described and analyzed the perspectives on learning that might be extrapolated from a number of different approaches to the practice of classroom assessment. Learning theorists themselves rarely make statements about how learning processes and outcomes within their models should be assessed, which may account for the lack of development of assessments aligned with some of the most interesting new learning theory. The intention of the earlier chapter was therefore to examine more closely the implications for assessment practice of three clusters of learning theories – behaviourist, cognitive constructivist and socio-cultural – and discuss whether eclectic or synthetic models of assessment, matched to learning, are feasible.

This chapter has a slightly different purpose. Instead of providing a review across the range of historic theories, it focuses on the problems and possibilities of developing assessment practice congruent with socio-cultural learning theory. There are three particular reasons for this decision, all connected with my more recent work. The first is that discussion of models of assessment consonant with socio-cultural perspectives on learning emerged as a common theme (James, 2010) in a significant number of the 53 articles, from across the world, in the ‘Educational Assessment’ section of the International Encyclopedia of Education (Peterson et al., 2010). The second is that most of the researchers involved in the 90 projects and thematic groups of the UK Economic and Social Research Council’s Teaching and Learning Research Programme (TLRP) claimed to take a socio-cultural or social constructivist theoretical position (James and Pollard, 2011). The TLRP aimed to ‘improve outcomes for learners of all ages in teaching and learning contexts across the UK’. As such, researchers had an obligation to consider what those outcomes might be and how they might be evaluated, although this task often proved challenging (James and Brown, 2005). The third reason is that when classroom teachers were introduced, at a conference in late 2006, to some of the ideas in the previous version of this chapter, they responded very enthusiastically to what I tentatively named ‘Third Generation Assessment’ (James, 2008). Without my knowledge at the time, a few went back to their schools to try to develop it. Some of the results are described later in this present chapter.

Alignment between assessment and learning

The alignment (Biggs, 1996; Biggs and Tang, 1997) of assessment with curriculum and pedagogy is a basis for claims for the validity of assessments, but the relationship is not straightforward. Indeed, there are plenty of examples of assessment practices that have only tenuous or partial relationships to current understanding of learning within particular domains. Take, for instance, short answer tests in science that require recall of facts but do not begin to tap into understanding of concepts or the investigative processes that are central to science as a discipline. Nor do assessment practices always take sufficient account of current understanding of the ways in which students learn subject matter, the difficulties they encounter and how these are overcome.

Historically, much assessment practice was founded on the content and methods of psychology, especially the kind that deals with mental traits and their measurement. Thus classical test theory has primarily been concerned with differentiating between individuals who possess certain attributes, or determining the degree to which they do so. The focus tended to be on whether some behaviour or quality could be detected rather than on the process by which it is acquired. However, during the twentieth century understanding of how learning occurs developed apace. Learning was no longer seen as largely related to an individual’s possession of innate and generally stable characteristics such as general intelligence. Interactions between people, and mediating tools such as language, came to be seen as having a crucial role. Thus the assessment of learning needs to take more account of the social as well as individual processes through which learning occurs. This requires expansion of perspectives on learning and assessment that take more account of insights from the disciplines of social psychology, sociology and anthropology.

In practical terms, while exciting new developments in our understanding of learning unfold, developments in assessment systems and technology have lagged behind. Even some of the most innovative developments, say, in e-assessment, are underpinned by models of learning that are limited or, in some cases, out of date. This is understandable, because the development of reliable assessments – always an important consideration in large-scale testing – is associated with an elaborate technology which demands much time and the skills of measurement experts, many of whom have acquired their expertise in the very specialist field of psychometrics. This is especially true in the United States, which has a powerful influence on practice in other countries.

I am primarily interested in classroom assessment by teachers, but research tells us that teachers’ assessment practice is inevitably influenced by external assessment (Harlen, 2004) and teachers often use these assessments as models for their own, even if they do not use them directly. By using models of assessment borrowed from elsewhere, teachers may find themselves subscribing, uncritically or unwittingly, to the theories of learning on which they are based. This raises a question about whether it really matters what conceptions of learning underpin classroom assessment practices if they are deemed to ‘work’ well enough, and whether the need for consistency between teaching, learning and assessment might be overrated.

My view is that it does matter because some assessment practices are very much less effective than others in promoting the kinds of learning needed by young people today and in the future (James and Brown, 2005). The learning outcomes of most value to enable human flourishing – as citizens, as workers, as family and community members and as fulfilled individuals – are those that enable them to continue learning, when and where required, in a rapidly changing, information- and technology-rich environment. There is a need, therefore, for teachers to have a view about the kinds of learning that are most valuable for their students and to choose and develop approaches to teaching and assessment accordingly.

Helping teachers to become more effective may therefore mean both change in their assessment practice and change in their beliefs about learning (James et al., 2007). It will entail development of a critical awareness that change in one will, and should, inevitably lead to the need for change in the other. So, for instance, implementing assessment for learning/formative assessment may require a teacher to rethink what effective learning is, and his or her role in bringing it about. Similarly a change in their view of learning is likely to require assessment practice to be modified. While my focus is mainly on formative assessment, a good deal is relevant to classroom-based summative assessment by which teachers summarize what has been achieved at certain times.

Theoretical foundations of learning and implications for assessment

Within the literature on learning theory, three clusters of theories are often delineated. In the US literature (Greeno et al., 1996; Bredo, 1997) the three perspectives are often labelled ‘behaviorist’, ‘cognitive’ and ‘situated’ but within the UK, drawing in more of the European literature, the labels ‘behaviourist’, ‘constructivist’, and ‘socio-cultural’ are sometimes preferred. These two sets of labels are roughly equivalent. For the benefit of teachers, Watkins (2003) has translated these different views of learning into descriptions: (1) Learning is being taught; (2) Learning is individual sense-making; and (3) Learning is building knowledge as part of doing things with others. Each of these perspectives is based on a view of what learning is, and how it takes place, but implications for assessment are rarely developed.

Behaviourism (learning is being taught) has fallen out of favour and there are few who would now subscribe to the view that learning is simply the conditioned response to external stimuli, and that rewards and punishments are powerful ways of forming or extinguishing habits. Another unfashionable tenet of behaviourism is that complex wholes are assembled out of parts, so learning can best be accomplished when complex performances are deconstructed and when each element is practised and reinforced and subsequently built upon. From this perspective, achievement in learning is often equated with the accumulation of skills and the memorization of information in a given domain, demonstrated in the formation of habits that allow speedy performance. Thus progress is often measured through unseen, timed tests with items taken from progressive levels in a skill hierarchy. Performance is usually interpreted as either correct or incorrect and poor performance is remedied by more practice on the incorrect items, sometimes by deconstructing them further and going back to even more basic skills.

Although learning theory has moved on, assessment practice stemming from behaviourism persists. For example, the approach is evident in many vocational qualifications post-16 where learning outcomes are broken down into tightly specified components. In the early days of the National Curriculum in England the disaggregation of attainment levels into atomised statements of attainment reflected this approach, as did the more recent assessment guidelines for Assessing Pupils’ Progress, associated with the pre-2010 Labour Government’s National Strategies. The widespread and frequent use of practice tests to enhance scores on national tests for 11-year-olds in England also rests on behaviourist assumptions about learning.

Cognitive constructivist theories of learning (learning is individual sense-making) have a much larger group of advocates today, recently joined by influential neuroscientists. Their particular focus is on how people construct meaning and make sense of the world by developing mental models. Prior knowledge is regarded as a powerful determinant of a student’s capacity to learn new material. There is an emphasis on ‘understanding’ (and eliminating misunderstanding), and problem solving is seen as the context for knowledge construction. Differences between experts and novices are marked by the way experts organize ‘salient’ knowledge in structures that make it more retrievable and useful. From this perspective, achievement is framed in terms of understanding concepts and their relationships, and competence in processing strategies. The two components of metacognition – self-monitoring and self-regulation – are also important dimensions of learning.

This perspective on learning has received extensive attention for its implications for assessment. The two companion volumes produced by the US National Research Council (Bransford et al., 2000; Pellegrino et al., 2001) are perhaps the best examples. In view of the importance attached to prior learning as an influence on new learning, formative assessment emerges as an important, integral element of pedagogic practice because it is necessary to elicit students’ mental models (through classroom dialogue, open-ended assignments, think-aloud protocols), in order to scaffold their understanding of knowledge structures and to provide them with opportunities to apply concepts and strategies in novel situations. In this context teaching and assessment are blended towards the goals of learning, particularly closing gaps between current understanding and the new understandings sought. It is not surprising therefore that many formulations of formative assessment are associated with this particular theoretical orientation. Some experimental approaches to summative assessment are also founded on these theories of learning, for example the use of computer software applications for problem solving and concept-mapping as a measure of students’ learning of knowledge structures. However, these assessment technologies are still in their infancy and much formal testing still relies heavily on behavioural approaches.

The socio-cultural perspective on learning (learning is building knowledge as part of doing things with others) is often regarded as a new development, but Bredo (1997) traces its intellectual origins back to the conjunction of functional psychology and philosophical pragmatism in the work of William James, John Dewey and George Herbert Mead at the beginning of the twentieth century. The interactionist views of the Chicago School, which viewed human development as a transaction between the individual and the environment (actor and structure), had something in common with the development of cultural psychology in Russia, associated with Vygotsky (1978), which derived from the dialectical materialism of Marx (see Edwards, 2005, for an accessible account). Vygotsky quoted Dewey, and, in turn, Vygotsky’s thinking has influenced theorists such as Bruner (1996) in the US and Engeström (1999) in Finland. Other key theorists who regard individual learning as ‘situated’ in the social environment include Barbara Rogoff (1990), Jean Lave and Etienne Wenger (Lave and Wenger, 1991; Wenger, 1998), who draw on anthropological work to characterize learning as ‘cognitive apprenticeship’ in ‘communities of practice’. Given their intellectual roots – in social theory, sociology and anthropology as well as psychology – the language and concepts employed in socio-cultural approaches are often quite different. For example, ‘agency’, ‘community’, ‘rules’, ‘roles’, ‘division of labour’, ‘artefacts’ and ‘contradictions’ feature prominently in the discourse.

According to this perspective, learning occurs in interactions between the individual and the social environment. Thinking is conducted through actions that alter the situation and the situation changes the thinking; the two constantly interact. Especially important is the notion that learning is a mediated activity in which cultural artefacts have a crucial role. These can be physical artefacts such as books and equipment but they can be symbolic tools such as language. Since language, which is central to our capacity to think, is developed in relationships between people, social relationships are necessary for, and precede, learning (Vygotsky, 1978). Thus learning is a social and collaborative activity in which people develop their thinking together.

Learning therefore involves participation, and what is learned is not the property of an individual but distributed within the social group. For example, the ability to use language is not solely an indication of individual intelligence but also of the intelligence of the community that developed the language, which the individual then puts to use. Thus the collective knowledge of the group or community is internalized by the individual. Similarly, as an individual creates new knowledge, for example, a new way of using a tool, then he or she will externalize that knowledge in communicating it to others who will put it to use and then internalize it. Thus knowledge is created and shared in expansive learning cycles.

Vygotsky’s theory of goal-oriented, tool-mediated learning activity can encompass learning outcomes associated with notions of learning as acquisition of knowledge and learning as participation in activity (Sfard, 1998). But it also embraces outcomes associated with creativity, because it provides a description of how knowledge and practices can be transformed. In other words, it can encompass a very wide range of outcomes: higher and lower mental processes; attitudinal, cognitive and behavioural outcomes; individual and shared activity; problem-solving processes and products; the acquisition of existing knowledge and the creation of new knowledge.

As I have argued elsewhere (James, 2010), Vygotsky also has something to offer on the issue of progression. The structures of grades, scales and attainment levels have accustomed us to regard progression as step-by-step and linear. According to Vygotsky, the mastery of tools of the mind takes place in the zone of proximal development (ZPD). As Grigorenko (1998: 210–11) points out:

The word zone refers to the nonlinearity of children’s development. In this zone, the child might move forward or backward, to the left or to the right. . . . The main characteristic of the ZPD is its sensitivity to the child’s individuality – its responsiveness to the unique profile of each child’s skills. The ZPD is of tremendous importance educationally. Generally, when educators evaluate a child’s skills, they focus on what the child demonstrates in his or her independent performance. . . . Vygotsky stated that the level of independent performance is an important, but not the only, index of development. To account for the dynamic process of development, we should consider the level of the child’s assisted performance.

There are two points to be made here. First, there is value in plotting a learner’s development as a profile within a zone and encouraging them to expand, deepen and enrich their knowledge, skills and understanding. Second, there is value in assessing how learners respond to assistance and the introduction of new tools by others. Grigorenko argues that by doing this we are more likely to determine students’ true level than by administering tests of unassisted performance.

Very often, teachers wait for children to demonstrate completely formed functions; teachers think that they can lead a child to the next step only after they have seen evidence that the child has successfully acquired the function taught at the previous step. As a result, children are limited in their learning opportunities to those that correspond to their level of independent performance. In other words, teachers who follow this standard practice minimize the student’s ZPD by almost closing it. On the contrary, ideally, the ZPD should be wide open, so that it can expand developmentally appropriate practices up to the level of assisted performance.

(Grigorenko, 1998: 212)

Vygotsky’s view that what learners can do at lower levels should not limit opportunities for their development of higher psychological functions was almost certainly influenced by his experience of working at a boarding school for children who were both deaf and blind. He claimed that education targeted on the formation of higher mental processes could help people overcome, or compensate for, deficits because it equipped them with adaptive strategies that might be unique for each person.

All of these ideas have implications for assessment. Practical applications of Vygotsky’s theories have been developed in versions of ‘dynamic assessment’. Some forms of dynamic assessment have been described as little more than superior intelligence tests, but there are clinical versions (Elliott et al., 2010) that use ‘hint structures’ in scaffolded instruction to maximize feelings of competence and efficacy. This affects test reliability, in a psychometric sense, although advocates claim that these concerns are less important than the quality of the insights that result and the likely magnitude of change in a learner’s performance (i.e. their validity). Dynamic assessments of this nature are currently used in one-to-one situations, usually with students with learning difficulties, therefore they are often felt to be impractical for wider application. However, the principles could well be adapted for everyday use by teachers if they were to be integrated with instructional strategies. Moreover they could serve both formative and summative assessment purposes although the formative would take priority. In this sense, versions of dynamic assessment could provide tools for formative assessment integrated into pedagogy.

Suggestions for socio-cultural assessment practice

As noted earlier, much current assessment practice derives from behaviourist or differentialist approaches, many of which are now rejected as ‘learning theories’ except in very limited senses. Pellegrino et al.’s (2001) important work provides both the underpinning theory and practical examples of more valuable forms of assessment based on sound cognitive science. An Open University Reader (Murphy, 1999) covers some of the debates that have led to socio-cultural approaches gaining ground in education but notes ‘a mismatch between curriculum and assessment rhetoric and teaching and learning practice’ (1999: xiii). In the absence of detailed guidance on what classroom assessment might look like if it were informed by socio-cultural theory, I (James, 2008: 31) have offered the following pointers to what I have styled ‘third generation assessment’:

  • If learning cannot be separated from the actions in which it is embodied, then assessment too must be 'situated'.
  • Assessment alongside learning implies that it needs to be done by the community rather than by external assessors.
  • Assessment of group learning is as important as the learning of the individual.
  • 'In vivo' studies of complex problem solving may be the most appropriate form for assessments to take. (Some ethnographic methods could be used, including methods for assuring quality of inferences from evidence.)
  • The focus should be on how well people exercise 'agency' in their use of the resources or tools (intellectual, human, material) to formulate problems, work productively and evaluate their efforts. This would be a proper justification for course-work assignments with students having access to source materials, because it is the way that these are used that is of most significance.
  • Learning outcomes can be captured and reported through various forms of recording, including narrative accounts and audio and visual media. The portfolio has an important role here.
  • Evaluation needs to be more holistic and qualitative, not atomized and quantified as in measurement approaches.

Two examples

Given the preceding discussion, it is not surprising that paradigm examples of a socio-cultural approach to assessment are difficult to find. There is still much work to be done to find ways of bringing assessment into better alignment with some of the most powerful ideas in contemporary learning theory. Nevertheless, two initiatives, one in high schools in the United States and the other in an infants’ school in the East of England, illustrate some possibilities at school and local level. System-wide applications are still somewhat distant.

Performances and exhibitions from the US Coalition of Essential Schools

The Coalition of Essential Schools (CES) was founded by Theodore Sizer with the aim of creating and sustaining schools that are personalized, equitable and intellectually challenging. Rejecting the behaviourism that had long dominated approaches to assessment in the US, Sizer (1992) promoted a model of alternative, authentic assessment based on ongoing performances or ‘exit’ exhibitions in which learning outcomes across disciplines might be demonstrated and evaluated. The exhibition is intended to bring together a number of important dimensions of learning and meaningful assessment.

  • It asks students to work across disciplines in a respectful way by creating 'real' learning activities. The dominant metaphor is 'student-as-worker'.
  • Tasks are not necessarily devised by teachers; students can devise them for themselves, providing they understand the principles that underlie their construction. Helping students to acquire this meta-level understanding is a valued pedagogical objective.
  • It asks students to practise using accumulated knowledge and apply it to new situations.
  • It insists on effective communication in a number of forms of expression: oral, written and graphic.
  • It requires that students be reflective, persistent and well organized.
  • It creates a focus for learning by describing the destination for their journey, although precise learning objectives are not tightly pre-specified. The teachers hope to be delighted and surprised by the learning that their students demonstrate.

An outline for an exhibition is given in Figure 10.1.

Figure 10.1 A Final Performance Across the Disciplines. Source: http://www.essentialschools.org/resources/123.

Grant Wiggins, the former director of research at the CES, described such ‘authentic assessments’ as having the following intellectual design features:

  • They are 'essential' - not needlessly intrusive, arbitrary, or designed to 'shake out' a grade.
  • They are 'enabling' - constructed to point the student towards more sophisticated use of the skills or knowledge.
  • They are contextualized, complex intellectual challenges, not 'atomized' tasks corresponding to isolated 'outcomes'.
  • They involve the student's own research or use of knowledge, for which 'content' is a means.
  • They assess student habits and repertoires, not mere recall or plug-in skills.
  • They are representative challenges - designed to emphasise depth more than breadth.
  • They are engaging and educational.
  • They involve somewhat ambiguous tasks or problems.

Given the multi-disciplinary nature of the exhibition, it is evaluated by a panel of assessors made up of the teacher who acted as the main tutor or supervisor of the work, another adult but not necessarily a teacher (a business representative might be appropriate), and a peer of the student’s own choice. The student is aware of the broad criteria by which the exhibition will be assessed because these are negotiated at the beginning of preparation for the task. About 50 minutes is allocated to each exhibition, half of which is devoted to the student’s presentation and the other half to discussion with the panel. The panel would be expected to ask penetrating questions about what has been learned, the completed research, reflections on the work covered and its relationship to the broader field, tentative hypotheses and ideas about further work, and reflections on the learning process itself (meta-learning). In some ways this process can be compared to the viva voce for the award of PhD degrees in the UK. Indeed, Sizer claims: ‘It began in the eighteenth century, as the exit demonstration in New England academies and in colleges like Harvard. The student was expected to perform, recite, dispute, and answer challenges in public session’.1

Third generation assessment in an infants' school in England

Jenny Lewis, a teacher at Recreation Road Infant School, Norwich, attended a talk on assessment and learning that I gave in 2006. She made a connection between my speculative account of a socio-cultural approach to assessment, which I labelled ‘third generation assessment’, and an approach to curriculum development and pedagogy that she had been developing with colleagues in her school. She returned to her school and, with the support of Luke Abbott, a senior local authority adviser, set up an action research project to investigate whether it was possible to develop assessment practices that were in harmony with her chosen pedagogy. The following is an edited version of her account of the project.2 The version here is reproduced with her permission.

I am a Year 2 teacher in a large Infant School (4 form entry) near the centre of Norwich. I initially conducted my research with my own class, but then extended the project to include other colleagues. The school has a creative, flexible and emergent curriculum and we use thinking skills, philosophy, drama and inquiry based learning to underpin our teaching and learning. My head actively promotes adult learning, innovation and risk taking and has been involved in the action research group as part of a drive to create a model of excellence in our school approach to formative assessment.

Several colleagues involved in the project, including myself, use a pedagogic system called Mantle of the Expert (MoE). MoE is an approach to learning devised by education and drama practitioner Dorothy Heathcote. Children and teachers work together to create an imaginary community within which they function as if they were experts, for example mountain rescuers or archaeologists. As the work progresses many possibilities begin to emerge which the learning community uses to define and deepen the imaginary world and explore the lives of the people that inhabit it. The community engages in a series of collaborative tasks, often motivated by a client’s demands, with teamwork, communication and problem solving central to the process.

There is a group responsibility for the project as it progresses and the children act and make decisions with responsibility and authority, tackling authentic issues that seem purposeful and urgent to them. Over the last three years I have worked with a team of practitioners, guided by Luke Abbott, on developing the use of MoE as a pedagogy in the classroom.3

One of the two Year 2 classes that I taught during this project worked as a salvage company responsible for exploring the sunken Titanic. Some of the colleagues involved in the action research group were running their own MoE projects and others were working with their classes on a range of other inquiry-based learning experiences. We all employed a range of assessment for learning systems to trial with our classes and collected evidence to report back to the group. The group then discussed and reflected on these practices and worked collectively to relate them to Mary James’s 3G model.

My personal aim was to develop a cyclical and meaningful assessment system based on the third generation model that worked both for me, and the children. I needed a system that was achievable, relevant and based firmly on a socio-cultural view of learning.

Some of the assessment practices we used are standard AfL practices used in many schools and they combine 2G (cognitive constructivist) and 3G (socio-cultural) elements. We have sometimes adapted these to make them more aligned with 3G assessment principles. Others are combinations of our own ideas and those of other researchers and practitioners. The following show the range of assessment practices we trialled:

  1. Dialogue as an integral part of ongoing work involving the whole class, groups or pairs of children, with or without an adult.
  2. A Blog Diary recording the day-to-day life of our shipwreck salvage company (MoE) October 2006 – May 2007. (This can be viewed at: http://theseacompany.blogspot.com/) This was a diary, recorded on a blog-site, so that it was available for anyone to read as our MoE project work progressed. Although time consuming to construct it was an invaluable way of recording and reflecting upon the learning of the class community. It revealed that assessment could happen alongside learning and not as an ‘after learning event’, as in a plenary session at the end of the day! An additional advantage of the blog-site was that the work became open to a wider community so that children, parents and other teachers were able to find out about and understand the learning that has taken place. Comments have been posted from across the world.
  3. An ongoing portfolio containing notes and reflections plus a range of collected evidence (annotated photos, self-assessments, transcriptions of dialogue from a Dictaphone, video evidence, pieces of work, etc.)
  4. Individual learning diaries kept by the children.
  5. Daily sessions focusing on meta-learning. The children interact with a puppet (Sniffles the Hopeless Hamster) in a dialogue concerning Guy Claxton’s ‘5 Rs’: resilience, reflection, resourcefulness, responsibility and relationships. The children know that Sniffles finds learning difficult and doesn’t understand how to put the 5 Rs into action, so they help him out by giving him lots of advice.
  6. Variety of self/peer assessment tools: easy – hard continuum; hot spot assessments; self-evaluation grids; end of year individual/group evaluations.
  7. Home/school contact books contributed to by teacher, children and parents.
  8. Learning Surgeries. Someone volunteers a problem they have encountered and the rest of the group offer their tips and ideas.
  9. Connections. At the end of a day or a week we brainstorm all the areas of learning we have covered and begin to make connections. These could be between one area of learning and another, or between an area of learning and the wider world.
  10. Questionnaires involving parents and children.
  11. Evidence of children’s work. I see children’s workbooks as ongoing working documents. A piece of totally independent work next to a piece of adult supported work can be incredibly useful.
  12. Personal Social and Health Education (PSHE) linked assessments: The Blob Tree; The Circle of Courage; Feelings Wall. We believe that wellbeing and recognition of feelings and emotions have a huge impact on the way children learn and therefore on the way in which we can help them to understand and move forward in their learning. These tools help the children develop an emotional vocabulary.
  13. Mind Maps.
  14. Peer teaching.

I am aware that many of the assessment practices we have trialled have a mixture of 2G (cognitivist) and 3G (socio-cultural) elements, although the use of situated dialogue, the blog and the portfolio, are very much 3G tools. Also, like any other school, we are required to assess our children in relation to National Curriculum levels, which is a standard 1G (behaviourist) procedure.

Mary James (2008) concludes her chapter with the questions: ‘Can all three generations of assessment practice be housed under the same roof? Is inter-generational conflict inevitable?’

I believe that it is possible to house the different elements under the same roof and to blend and bond their key elements. If the statutory (1G) requirements are kept to providing NC levels in English, maths and science, then, as teachers who know our NC, we can provide this information relatively easily, however little validity we feel such levels have as a way of assessing children’s learning.

I still feel, however, that teachers and schools who are determined to assess what they value can find a way to keep deep learning and formative assessment at the heart of what they do. Many of the 2G/3G assessment tools we have trialled are quick and easy to implement. Formative assessment as ongoing, situated dialogue and reflection is part of all good primary practice and, just because it may not always be recorded, does not mean that it should be less valued. Some of the other tools we have used, especially the blog and transcriptions of dialogue from a Dictaphone, are much more time-consuming. No one would be able to implement the whole range of tools we have worked on in one classroom. It would be up to headteachers and individual teachers to decide which are the most effective assessment tools for them. I always use the ‘2 M’ rule – assessment should be meaningful and manageable!

Discussion

What are the important features of these examples? First, assessment does not drive the curriculum or pedagogy. It starts from a strong belief about worthwhile learning outcomes and learning experiences and how these are best achieved through authentic, collaborative problem-solving tasks. The examples illustrate how teachers search for assessment models and practices to support their educational goals and the processes of learning they value. Secondly, they embody a broad conception of ‘community’ and the roles that various actors, within and beyond the school, have to play both in learning and assessment. Such a shift will be vitally important if there is to be any hope of creating models of assessment more congruent with our current best understanding of the learning process and its outcomes. The dominance of psychometric models must, in large measure, be attributable to the fact that parents, employers, policy makers, the media and the general public do not really understand what goes on in classrooms. They are therefore wedded to proxy measures of learning and achievement that have doubtful validity. If the community can be more directly involved, then confidence in teachers’ assessments and students’ self-assessments can be strengthened. The potential to use the blogosphere, to communicate and assess learning by creating dialogue with near and distant communities, is particularly exciting. Of course, there are problems to be overcome, but pursuing the possibilities associated with the Web may be more fruitful than much of the current effort put into the (commercial) development of e-assessments based on quite different models of learning.

The list of characteristics of third generation assessment practices, given above, indicates possibilities for forms of assessment in sympathy with valued teaching and learning practices in schools and workplaces. It is especially suited to the assessment of collaborative group work on complex, extended projects that have a level of authenticity not available to tests. This could make it attractive to employers who increasingly claim they are interested in recruits who can demonstrate their capability to work in teams to find creative solutions to complex problems. It is significant, however, that one of the examples given here comes from an infants’ school, where National Curriculum Assessment is based on teachers’ judgements rather than formal external testing. At other stages of schooling in England, the challenges of developing new approaches, within a system dominated by high-stakes external tests and formal examinations, based on entirely different models, are much greater. The key challenge will be to devise new assessment practices that can command confidence, and meet the range of needs of diverse users of assessment information, when large numbers of students are involved and when those who are interested in the outcomes of such learning cannot participate in the activities that generate them.

Although socio-cultural approaches can make claims for greater validity, they are a long way from providing convincing assurances about the reliability of assessment results. Nevertheless, apprenticeship models, from which many socio-cultural ideas derive, may offer solutions because underpinning such models is the concept of the ‘community of practice’ – the guild which is the guardian and arbiter of developing standards. In other words, validation of standards by a community of experts, or, at least, ‘more expert others’, may be a way of assuring quality.4 Furthermore, the dialogue and interaction, which socio-cultural approaches prioritise, promise profound educational (formative) benefits because, in assessment conversations between students and their teachers and peers, they can deepen their understanding of what counts as appropriate responses to problems or tasks, what counts as quality and how criteria for judgement are interpreted and applied in complex activities (Sadler, 2010). Clearly, more work needs to be done to develop approaches to assessment that are coherent with socio-cultural perspectives on learning, but the potential is there.

Figure 10.2 Some responses from children and parents to the Titanic salvage company work.

Notes

1 The source of this example and the quotations is Cushman (1990).

2 Jenny’s complete report can be downloaded from http://www.mantleoftheexpert.com/category/articles/ (accessed 20th February 2011).

3 MOE has a website at http://www.mantleoftheexpert.com (accessed 20th February 2011).

4 Within UK vocational education, systems of internal and external assessors and verifiers have attempted to do this, although large-scale systems almost inevitably become bureaucratic, unwieldy and reductive.

References

Biggs, J. B. (1996) ‘Enhancing teaching through constructive alignment’, Higher Education, 32: 347–64.

Biggs, J. and Tang, C. (1997) ‘Assessment by portfolio: constructing learning and designing teaching’. Paper presented at the annual conference of the Higher Education Research and Development Society of Australasia, Adelaide, July.

Bransford, J. D., Brown, A. L. and Cocking, R. (eds) (2000) How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press.

Bredo, E. (1997) ‘The Social Construction of Learning’. In G. D. Phye (ed.) Handbook of Academic Learning: Construction of Knowledge. San Diego, CA: Academic Press.

Bruner, J. (1996) The Culture of Education. Cambridge, MA: Harvard University Press.

Cushman, K. (1990) ‘Performance and exhibitions: the demonstration of mastery’. Paper posted at http://www.essentialschools.org.resources/123, accessed 10th September 2011.

Edwards, A. (2005) ‘Let’s get beyond community and practice: the many meanings of learning by participating’, The Curriculum Journal, 16 (1): 49–65.

Elliott, J., Grigorenko, E. and Resing, W. (2010) ‘Dynamic Assessment’. In P. Peterson, E. Baker and B. McGaw (eds) International Encyclopedia of Education. Vol. 3. Oxford: Elsevier.

Engeström, Y. (1999). ‘Activity theory and individual and social transformation’. In Y. Engeström, R. Miettinen and R-L. Punamäki (eds) Perspectives on Activity Theory. Cambridge: Cambridge University Press.

Greeno, J.G., Pearson, P.D., and Schoenfeld, A.H. (1996) Implications for NAEP of Research on Learning and Cognition. Report of a Study Commissioned by the National Academy of Education. Panel on the NAEP Trial State Assessment, conducted by the Institute for Research on Learning. Stanford, CA: National Academy of Education.

Grigorenko, E. (1998) ‘Mastering tools of the mind in school (trying out Vygotsky’s ideas in classrooms)’. In R. Sternberg and W. Williams (eds) Intelligence, Instruction and Assessment: Theory and Practice. Mahwah, NJ: Erlbaum: 201–44.

Harlen, W. (2004) A Systematic Review of the Evidence of Impact on Students, Teachers and the Curriculum of the Process of Using Assessment by Teachers for Summative Purposes. London: EPPI-Centre, Institute of Education.

James, M. (2006) ‘Assessment, teaching and theories of learning’. In J. Gardner (ed.) Assessment and Learning. London: Sage.

James, M. (2008) ‘Assessment and learning’. In S. Swaffield (ed.) Unlocking Assessment: Understanding for Reflection and Application. Abingdon: Routledge (David Fulton): 20–35.

James, M. (2010) ‘An overview of educational assessment’. In P. Peterson, E. Baker and B. McGaw (eds) International Encyclopedia of Education, Vol. 3. Oxford: Elsevier: 161–71.

James, M. and Brown, S. (2005) ‘Grasping the TLRP nettle: preliminary analysis and some enduring issues surrounding the improvement of learning outcomes’, The Curriculum Journal, 16(1): 7–30.

James, M. and Pollard, A. (2011) ‘TLRP’s ten principles for effective pedagogy: rationale, development, evidence, argument and impact’, Research Papers in Education (Special Issue), 26(3): 275–328.

Also published in James, M. and Pollard, A. (eds) (2011) Principles for Effective Pedagogy: International responses to evidence from the UK Teaching and Learning Research Programme. Abingdon: Routledge.

James, M., McCormick, R., Black, P., Carmichael, P., Drummond, M-J., Fox, A., MacBeath, J., Marshall, B., Pedder, D., Procter, R., Swaffield, S., Swann, J. and Wiliam, D. (2007) Improving Learning How to Learn – Classrooms, Schools and Networks. London: Routledge.

Lave, J. and Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.

Murphy, P. (ed.) (1999) Learners, Learning and Assessment. London: Paul Chapman Publishing.

Pellegrino, J. W., Chudowsky, N. and Glaser, R. (eds) (2001) Knowing what Students Know: The Science and Design of Educational Assessment. Washington, DC: National Academy Press.

Peterson, P., Baker, E. and McGaw, B. (eds) (2010) International Encyclopedia of Education. Oxford: Elsevier.

Rogoff, B. (1990) Apprenticeship in Thinking: Cognitive Development in Social Context. New York: Oxford University Press.

Sadler, D. R. (2010) ‘Beyond feedback: Developing student capability in complex appraisal’, Assessment & Evaluation in Higher Education, 35 (5): 535–50.

Sfard, A. (1998) ‘On two metaphors of learning and the dangers of choosing just one’, Educational Researcher, 27: 4–13.

Sizer, T. (1992) Horace’s School: Redesigning the American High School. New York: Houghton Mifflin.

Vygotsky, L.S. (1978) Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.

Watkins, C. (2003) Learning: A Sense-Maker’s Guide. London: Association of Teachers and Lecturers.

Wenger, E. (1998) Communities of Practice. Cambridge: Cambridge University Press.
