Contents
Introduction
Literature Review
Research Methods
Data Analysis and Discussion
Conclusion
References
List of Appendices
TE2: Assessment and Monitoring
Introduction
In this discussion, I evaluate my practice with regard to assessment and monitoring pupil progress. I first consider the relevant literature, exploring assessment of learning and assessment for learning as well as suggestions for monitoring pupil progress. Assessment for learning (AfL) is defined by Driscoll, Macaro and Swarbrick (2013, p.150) as ‘formative assessment and the practical implementation of this by teachers in classrooms’. Assessment of learning (AoL), often referred to as summative assessment, ‘identifies the standard of attainment achieved at a particular moment in time’ (Kyriacou, 2014, p.167). I then discuss the approach taken in my research design, followed by a critical evaluation of my progress so far, supported by my documentary evidence.
Literature Review
As a key part of the teaching and learning process, assessment is a frequent focus of educational research, both in general and specifically in modern languages (ML). The Office for Standards in Education (Ofsted) (2011, p.31) states that best practice includes ‘regular assessment of all four language skills’, while the Department for Education (DfE) (2013) makes no specific mention of assessment in the National Curriculum. This contrast suggests that official guidance offers differing views on the importance of assessment and monitoring.
Pachler et al (2014, p.381) argue that recording pupil progress aids future planning and that the mark book is an important tool. They extend Ofsted’s expectations of assessment: a teacher should record not only how pupils are progressing in key skill areas, but also ‘how they are using the language, their levels of contribution and confidence’ (p.382). Kyriacou (2014, p.181) agrees that progress records should reflect ‘different aspects of attainment’. It could be said that one purpose of education is to prepare pupils for future careers, for which they need a wide range of skills, not all of them academic. Pachler, Evans and Lawes (2007, p.87) note ‘how little value is afforded to the ability of working effectively in a team’, despite the fact that many employers value this skill.
There are, however, benefits to traditional assessment methods. Gipps and Stobart (1993, p.12) talk of positive achievement in relation to the GCSE exam: it allows pupils to show what they can do. Olivier (1990, p.49) mentions marking and feedback as a key part of assessment and monitoring and claims that it ensures that ‘the student, the teacher and the parent are all aware of what is expected and of whether the expectations have been met’. It is important that parents are kept informed of progress to maintain a good working relationship between school and home. Using records of progress as a means of reporting that progress to others applies not only to parents, but also to future teachers, schools, universities or even employers (Kyriacou, 2014, p.181).
There are, of course, drawbacks to assessment, most significantly the marking involved. Smith and Conti (2016, p.196) claim that formative assessment provides ‘ongoing feedback’ for both teachers and pupils. This often takes the form of comments and, as Liu and Carless (2006, p.9, cited in Irons, 2008) state, ‘managing time and workloads are significant challenges in the provision of feedback’. Is this time and effort worthwhile? Jones and Wiliam (2007) note that teachers are under pressure to ‘work more effectively in order to improve students’ grades’, yet teachers often have little say in how they mark. Departmental and school-wide marking policies can impose specific requirements (Smith and Conti, 2016, p.203) which may not always seem logical but must nevertheless be followed (Pachler et al, 2014, p.373).
Marks and feedback are not the only options when it comes to assessing and recording progress. In fact, Ofsted (2011, p.31) claimed that the way students’ work was marked was often ‘unhelpful’. Teachers have therefore turned to alternative assessment methods to save time and present a more accurate picture of pupil progress. AfL has ‘taken a central place in the educational assessment arena’ (Jones, 2013, p.150), and pupils taught in this way can ‘become independent and well-motivated learners’ (p.155). This sounds very positive, but how does AfL work in practice?
AfL, or formative assessment, is not entirely separate from summative assessment (AoL). Summative assessments can even be used formatively (Smith and Conti, 2016, p.197) when the results are used to aid planning or to set targets. Black et al (2003) carried out studies relating to AfL and found that allowing thinking time after questions, or asking pupils to discuss ideas in pairs, increased participation significantly (pp.70-75). Pachler et al (2014, p.370) agree that this has benefits such as longer answers, fewer pupils refusing to answer, and pupils elaborating on each other’s answers.
Research Methods
The data collected for this piece of research comprised documentary evidence, such as pupil work and marking policies. Documents present a view of a specific point in time (Burton, Brundrett and Jones, 2008, p.111), and some, such as the marking policies, were not created for this assignment and are therefore less subject to bias than the evidence I chose to collect myself, such as the pupil work.
I tried to prevent bias by questioning my practice at each stage and choosing random samples of work where possible, as Check and Schutt (2012, p.266) suggest. Due to the nature of my research, however, this was not always possible. To show a spread of data I selected high, low, and middle range examples from assessments. This purposeful choosing of evidence is similar to a purposive sample, as described by Cohen et al (2011, p.156). It is not a true representation of the available data but offers certainty that the traits I wish to examine in my research are present.
As I examined the data collected, I referred to the literature on assessment to cross-reference my findings. I was unable to triangulate using multiple methods of data collection, as this piece of research did not involve questionnaires, interviews or similar. This is the main weakness of my research, as Check and Schutt (2012, p.267) and Bell with Waters (2014, p.187) both state the need for triangulation in research and confirm that the accuracy of a piece of research increases with the number of data sources used.
I kept all work and policies anonymous and sought appropriate consent for the use of the documents in question prior to conducting my research, in accordance with the British Educational Research Association (BERA) guidelines (2011, p.5), as evidenced in my ethics form (appendix one).
Data Analysis and Discussion
One of the first documents I received at my current placement was a grid outlining the dates by which various pieces of information on pupil progress were needed (appendix three). Olivier (1990) and Kyriacou (2014) both mention the importance of recording progress in order to report it later, and the detail in this document shows that the school recognises this importance. One of my targets from my last placement was to begin recording progress earlier so as to have a full record, and diligent use of the Reporting Progress document aided me in this. The progress data involved is mostly summative (AoL): assessment ‘at the end of a unit of work, term, year or course’ (Pachler et al, 2014, p.367). This is important information for a student teacher: as well as showing how pupils have progressed from one unit to the next, it indicates, to a certain extent, how well the teaching has worked (Pachler et al, 2014, p.367). I therefore collected AoL data not only at the school’s request, but also for my own professional development.
At the beginning of this placement I collected marks for end-of-unit reading and listening assessments from a Year 9 group for my mark book (appendix 2.4). I intend to compare these with the next set of end-of-term assessments to monitor both learner and teacher progress. Jones (2013, p.154) argues that looking for evidence of learning is more valuable than merely evaluating teaching in isolation. As well as recording the marks, I made sure to copy a selection of test papers to provide context (appendices six and seven), as suggested by Pachler et al (2014, p.383). I started doing this in my first placement, but this time made sure to include a range of marks from the top, middle and lower end to help check for consistency in marking, which Ofsted (2011, p.31) says can be an issue.
Mutton (2014, p.38) adds that a mark alone does not show which questions commonly posed problems. To overcome this, I calculated average marks for each question in a recent Year 7 assessment (appendix thirteen), and will use this information in my future planning, either to cover a specific topic again or as an opportunity to discuss exam technique.
Other key documents which helped me with AoL were the Key Stage 3 and 4 marking policies (appendix four) and mark schemes (appendix five). Pachler et al (2014, p.373) state the importance of following school or departmental policies on marking, especially for student teachers, who may need the extra guidance. I struggled to mark a set of speaking assessments during my first placement, so I made sure to get help from my mentor when I began marking against the mark schemes during this second placement. Thanks to these documents and suggestions from the literature, my practice regarding marking against set criteria has improved.
The Key Stage 3 policy (appendix 4.1) sets out when to mark books and how much detail should be included in written feedback. The policy states that a longer piece of written work should receive a mark and a comment, in accordance with the Key Stage 3 writing mark scheme (appendix five), while class work and homework should receive comments only. I marked some writing completed by a Year 7 group as AfL (appendix eleven): I wanted the pupils to be motivated by their marks and to learn to improve from their comments. Jones and Wiliam (2007, p.13) claim that giving both marks and comments appears to be a compromise but is, in fact, the same as giving marks alone: ‘the high-achieving students don’t need to read the comments and the low-achieving students don’t want to’. I had no choice about this, as I had to follow the policy, but there were a few things I could do to make the feedback effective. I used a ‘what went well’ (WWW) and an ‘even better if’ (EBI) comment, but perhaps in future I could ask pupils to do this themselves or for a partner. Beaton (1990, p.46) states that pupils are harder on each other than a teacher would be in such exercises, but also that they listen to each other because of ‘pupil speak’ (Black et al, 2003).
Another way I attempted to make my marking more effective was by using codes for errors with a set of Year 8 books (appendix ten), such as ‘Sp’ for a spelling error or ‘V’ for an incorrect verb form, and having pupils correct them. Jones and Wiliam (2007, p.14) suggest telling pupils how many errors there are, but not where they are. This is a form of self-assessment and can aid learner autonomy (Mutton, 2014, p.33); pupils learn to recognise their more frequent errors and how to correct them. Black et al (2003, p.108) agree that it is important to ‘stimulate the students to reflect on where they feel their learning is secure […] and where they need to concentrate their efforts’, for which the comments were arguably more effective than the error correction. In the Year 7 writing assessments (appendix fourteen) and the exit tickets used as AfL (appendix eight), I therefore corrected the errors myself, and will use the information to aid future planning.
I have included a range of information on my mark sheets (appendices 2.1-2.5): not only marks but also judgements on effort and current attainment, as well as actions to take if these are not as expected (appendix 2.6). While regular testing, such as vocabulary tests (appendix nine), is important (Kyriacou, 2014, p.176), it is also important to give feedback about effort (p.181). Pachler et al (2014, p.377) agree that teacher judgements are ‘essential’ in the monitoring process.
Conclusion
Teachers’ Standard six (DfE, 2011) states: ‘A teacher must make accurate and productive use of assessment.’ To make sure my use of assessment is accurate, I have used a variety of assessment activities and aimed to monitor progress in all four ML skill areas, as well as more general aspects such as effort and contribution. I have made use of both formative and summative assessment to ‘secure pupils’ progress’ (DfE, 2011). My practice regarding formative assessment has particularly improved since my last placement; I continue to employ good questioning techniques but now allow for more thinking time and more pair discussion, as much of the research suggests. I give pupils regular feedback (DfE, 2011), though this is an area in which I could improve, especially regarding written feedback for speaking (Mutton, 2014, p.37). I also ‘encourage pupils to respond to the feedback’ (DfE, 2011) and will continue to develop my use of comment-based marking to help pupils reflect not only on their successes, but also on how they can improve.
Finally, I will use the data I have collected to ‘set targets and plan subsequent lessons’ (DfE, 2011), to ensure progress for the learners and my own teaching.
References
- Beaton, R. (1990) ‘The many sorts of error’, in: Page, B. (ed.) What do you mean…it’s wrong? London: CILT
- Bell, J. with Waters, S. (2014) Doing Your Research Project: A Guide for First-time Researchers. 6th edn. Maidenhead: Open University Press
- Black, P., Harrison, C., Lee, C., Marshall, B. and Wiliam, D. (2003) Assessment for Learning: Putting it into Practice. Maidenhead: Open University Press
- British Educational Research Association (2011) Ethical Guidelines for Educational Research. London: British Educational Research Association
- Burton, N., Brundrett, M. and Jones, M. (2008) Doing Your Education Research Project. London: SAGE
- Check, J. and Schutt, R. (2012) Research Methods in Education. California: SAGE
- Cohen, L., Manion, L. and Morrison, K. (2011) Research Methods in Education. 7th edn. Oxon: Routledge
- Department for Education (2011) Teachers’ Standards. Available at: https://www.gov.uk/government/publications/teachers-standards (accessed 1st April 2017)
- Department for Education (2014) National Curriculum in England: Secondary Curriculum. Available at: https://www.gov.uk/government/publications/national-curriculum-in-england-secondary-curriculum (accessed 1st April 2017)
- Gipps, C. and Stobart, G. (1993) Assessment: A Teachers’ Guide to the Issues. 2nd edn. Kent: Hodder and Stoughton Ltd.
- Irons, A. (2008) Enhancing Learning Through Formative Assessment and Feedback. London: Routledge
- Jones, J. and Wiliam, D. (2007) Modern Foreign Languages inside the Black Box. London: King’s College London
- Jones, J. (2013) ‘Developments in Formative Assessment: A retrograde step for teaching and learning?’, in: Driscoll, P., Macaro, E. and Swarbrick, A. (eds.) Debates in Modern Languages Education. Oxon: Routledge, pp. 150-162
- Kyriacou, C. (2014) Essential Teaching Skills. Oxford: Oxford University Press
- Mutton, T. (2014) ‘Developing foreign language skills through formative assessment’, in: Pachler, N. and Redondo, A. (eds.) A Practical Guide to Teaching Languages in the Secondary School. 2nd edn. Oxon: Routledge, pp. 27-41
- Olivier, R. (1990) ‘Learner Autonomy’, in: Page, B. (ed.) What do you mean…it’s wrong? London: CILT
- Pachler, N., Evans, M. and Lawes, S. (2007) Modern Foreign Languages: Teaching School Subjects 11-19. London: Routledge
- Pachler, N., Evans, M., Redondo A. and Fisher L. (2014) Learning to Teach Foreign Languages in the Secondary School. 4th edn. Oxon: Routledge
- Office for Standards in Education (2011) Modern Languages: Achievement and Challenge 2007-2010. Available at: https://www.gov.uk/government/publications/modern-languages-achievement-and-challenge-2007-to-2010 (accessed 1st April 2017)
- Smith, S. and Conti, G. (2016) The Language Teacher Toolkit. Leipzig: Amazon Distribution GmbH
List of Appendices
Appendix One - Ethics Form
Appendix 2.1 – Class Mark Sheet One
[...]