Monday, May 18, 2015

Entry 56 IDT1415 - Peer Feedback: To what extent is peer-feedback a viable option in language learning?

I believe that there is a place for peer feedback in language learning, as argued in my response to Henry's question in week 10. However, the issue of students lacking the assessment skills to implement this approach confidently (Huang 2012:20) prevails in most contexts. I feel strongly about this because my experience as an assessor under various schemes reminds me of how difficult it was initially and how only experience has brought about mastery. As underlined by Hounsell et al. (2007, making reference to the work of Eraut 1995; Morgan 2004; Claxton 1995), there is an ever-present need to 'nurture the evaluative 'connoisseurship' or acumen that is expected of experienced assessors and which comes not just from familiarity with marking criteria alone, but from first-hand experience in applying those criteria to a varied range of submitted assignments or assessments and arriving at considered judgments'. It is this ongoing experience of applying criteria that makes the difference; it is what students lack and what tutors rarely provide. Also, as argued by Sadler (1989 in Nicol & Macfarlane-Dick 2006), for students to be able to compare and take action on feedback (and, in my opinion, to be able to give valuable peer feedback) 'they must already possess some of the same evaluative skills as their teacher'. Now, a skill is a technique which has been rehearsed and applied consciously so many times that it has become unconscious behaviour (Oxford 2011). This tells me that for students to 'have the same evaluative skills as their teacher', as Sadler argues, they need to be given opportunities to work on the necessary assessment and self-assessment skills so that they can develop them over time, which is fully in line with Yorke (2003) and Boud (2000 in op.cit.), amongst others, who argue that teachers need to do more to strengthen their students' self-assessment skills.

As can be seen in Liu & Carless (2006), theirs is an 'ongoing' project, which further supports my argument that although peer feedback is possible, it is a strategy which requires training over time. I particularly like their idea of peer feedback as dialogue (p.280) and see it as the first step or 'precursor', as they call it, in my own context, while I regard their peer assessment, in which students grade the work or performance of others, as not viable in the near future in my current institution.

Many benefits are offered in support of the idea that peer feedback promotes learning. For instance, Falchikov (2001 in op.cit.) provides evidence that peer feedback enhances learning through the articulation of the subject matter with which students engage; students receive faster feedback from their peers than from tutors (Gibbs 1999 in Liu & Carless 2006); and learning becomes public rather than private, amongst other benefits. Based on my own experience and evolution as an online student, I particularly agree with their statement that 'Once students are at ease with making their work public, we could create conditions under which social learning might be facilitated' and that the level of threat felt is minimised by the rapport built between peers. However, once again I would argue that as long as peer feedback is the aim, not peer assessment, this would be a viable path in my context, and I would fully embrace Brown et al.'s (in Liu & Carless 2006) argument that while students rarely resist informal peer feedback, they resist peer assessment for the three reasons given: 'dislike of judging peers in ways that ‘count’; a distrust of the process; and the time involved.' The latter is especially important, as we are often constrained to cover the syllabus in the given time so that students are ready for the exam at the end of the school year.

Unfortunately, out of Liu & Carless's (op.cit.) three suggestions for the future, only two would seem viable in my own context: strategies for engaging students with criteria, which already happens to a small extent in the marking of writing papers for Cambridge Preliminary and First Certificate exams, and cultivating a course climate for peer feedback, which is also discreetly and implicitly done through activities in which students display their work around the classroom and are asked to choose the best piece, usually writing, while preparing a justification for their choice either individually or in groups. Peer feedback integrated with peer assessment is far from becoming a reality because of the constraints mentioned earlier. It is encouraging to see that Liu & Carless's (2006) strategy for engaging students with criteria and quality is something I already do to some extent: as they suggest (p.287), involving students in the identification of standards and the criteria representing those standards. In my case, I get students to familiarise themselves, in class and out of class, with samples of written work marked at different bands with examiner feedback, while asking them to identify the standards or features that make a band 5 First Certificate written task a band 5. This is followed by discussion and, although difficult at the beginning of the school year, and therefore 'introduced early on in their course' (Teaching and Learning Centre 2012), through practice and over time, as I argued above, students develop this very specific skill to some extent, which in turn makes them more aware of their own performance and, when applied externally, of their peers'. Along the same lines, and as argued by Sadler (2002 in op.cit.), high-standard exemplars (typically previous student assignments) are more effective than a focus on criteria alone.
Again, I'm thrilled to see this in the article, as for years I have provided my CELTA, YL Extension to CELTA and DELTA trainees with samples of assignments, carefully selected and collected over the years, for them to 'see' the criteria 'in place'.


Hounsell, D., Xu, R. & Tai, C.M., 2007. Balancing assessment of and assessment for learning. Guide no. 2, p.15.
Liu, N.-F. & Carless, D., 2006. Peer feedback: the learning element of peer assessment. Teaching in Higher Education, 11(3), pp.279–290.
Nicol, D.J. & Macfarlane-Dick, D., 2006. Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), pp.199–218.
Oxford, R., 2011. Teaching and Researching Language Learning Strategies. Applied Linguistics in Action Series, C.N. Candlin and D.R. Hall, eds. Oxford, UK: Oxford University Press.
Teaching and Learning Centre, 2012. Self-assessment and peer feedback.

Friday, April 17, 2015

Entry 55 IDT1415 - Assessment of and for learning - Balancing Assessment with Web 2.0 Tools

Hello everybody,

Here's my full contribution to this task on assessment of and for learning. After reading this week's articles and materials, I created a table which aims to assess these tools as suggested in the task. I designed the table (assessment instrument) based on the inclusion of a generalised form of assessment criteria (Moon 2002a:103), assigning a happy or unhappy face as the equivalent of a Yes/No whenever a tool met a criterion as defined by Hounsell et al. (2007) regarding the four strategies: feedforward assessments, cumulative coursework, better-understood expectations and standards, and speedier feedback.

I must confess that I found Hounsell et al. (2007) and Carless (2007) the most interesting of the four resources, while I also found myself going back to last week's Knight (2001). The CambridgeTV video was not very informative - maybe because I watched it after doing all the readings :-) - and I found Huang's (2012) article a bit boring. Here's a link to the table.


Carless, D., 2007. Learning‐oriented assessment: conceptual bases and practical implications. Innovations in Education and Teaching International, 44(1), pp.57–66.

Hounsell, D., Xu, R. & Tai, C.M., 2007. Balancing assessment of and assessment for learning. Guide no. 2, p.15.

Huang, J., 2012. The Implementation of Portfolio Assessment in Integrated English Course. English Language and Literature Studies, 2(4), pp.15–21.

Knight, P., 2001. A Briefing on Key Concepts – Formative and summative, criterion & norm-referenced assessment, pp.1–32.

Entry 54 IDT1415 - Assessment Types and Criteria. Am I doing it right?

While reading Biggs (2003), Moon (2002) and Knight (2001), I wondered about three things and thought I'd share them with you here. First, how free are we in our contexts with regard to assessment methods? Second, how do you assess your students in general, e.g. do you apply an established method or do you have your own systems which complement the established procedure set by your institution? Third, how viable is a shift from a summative-reliant assessment to a more balanced, or even better, formative-reliant approach?

In my own experience, I have come to realise that my Learning Outcomes (LOs) to date are a mix of Assessment Criteria (AC) and LOs, as I tend to use a mix of tentative and definite language as well as a mix of low- and high-level verbs, with a stronger tendency towards the latter. This realisation means I will try to tidy them up from now on so that they are consistently in line with LO or AC definitions depending on the specific context, e.g. LOs for input sessions and AC when measuring achievement. I was pleasantly surprised to see that Biggs' (2003) four steps for constructive alignment are to a great extent already present in my current practice as a teacher trainer, as follows:

1. Define ILOs - Intended Learning Outcomes (They're an integral part of my design procedure for input sessions and are presented and briefly discussed with candidates at the beginning of each session.)

2. Choose/design activities which will lead to the ILOs (The materials for the input sessions are designed to adhere to the premises of Loop Input (Woodward 2003), a concept I was struck by when I first came across it in 2000 and which has informed my approach ever since.)

3. Assess students' learning outcomes to see how closely they match what was intended (With specific reference to University of Cambridge Teaching Awards, e.g. CELTA, DELTA and YL Extension to CELTA courses, this is more easily done, as LOs and Assessment Criteria are already drawn up. Thanks to the fact that assessment on these courses is continuous and integrated, it is then a matter of matching examples of achievement of these LOs to the criteria via reflection on aspects of the course such as Teaching Practice (TP), self-reflection on TP and peer assessment (Knight 2001), written assignments, and overall performance, in an ongoing triangulation exercise throughout the course. On these courses there is an emphasis on 'thinking about learning, teaching and assessment' (Knight 2001:8) via the input sessions, and also on the formative assessment which takes place through discussions of theory (input sessions) and practice (TP), including peer and self-assessment. Peers observe one another in TP, candidates complete a written self-reflection immediately after TP, and in TP feedback the tutor moderates candidates' experiences, self-reflections and peer assessment contributions.)

4. Arriving at a final grade (Grade assessment criteria (Moon 2002:95) are then applied: candidates can be allocated a grade (Fail, Pass, Pass B, Pass A) according to their continuous and integrated assessment performance, matching their Teaching Practice, Written Assessment and Overall Personal Performance against the criteria included in the CELTA Syllabus and, from 2014, specific band descriptors which support the allocation of any given grade.)

I believe that the above gives tutors a fair degree of freedom as to how the criteria are applied, in that there are clearly set criteria which are further developed by examples for each criterion, as can be seen in the CELTA5 candidate booklet available on the internet. Please note the link leads to the 2007 version, so it is not up to date; however, the criteria and criterion examples are still valid.


Biggs, J., 2003. Aligning teaching for constructing learning, pp.1–4.

Knight, P., 2001. A Briefing on Key Concepts – Formative and summative, criterion & norm-referenced assessment, pp.1–32.

Moon, J., 2002. Writing and using assessment criteria. The Module and Programme Development Handbook: A Practical Guide to Linking Levels, Outcomes and Assessment Criteria, pp.79–106.

Woodward, T., 2003. Loop input. ELT Journal, 57(3), pp.301–304.

Entry 53 IDT1415 - Thoughts on Collaboration

Look for a case study in which some form of group work is part of a language course. Reflect on its design and how it was integrated with the rest of the course. Also consider whether technology played any role in the success (or not) of the activity. The point of this task is not to examine the benefits of collaboration for learning (we did that last semester) - we want to focus on its implications for course design.

Abstract: This research tries to analyze the way student groups interacted and answered the proposed task in the different work groups. Beyond that, it was our objective to acknowledge how these same students evaluate the teacher’s performance in the seminar monitoring. The results of this study indicated different interaction and organization levels in the same task. Those differences had implications in the way of leading the task and in the final result. About the teacher, the students considered she had a good participation, providing the support asked, being the “facilitator” which was the more valued skill.

Description & Review of Article

The title of this study caught my attention because using forums is something we do here regularly, and something I also have to do on an online course I moderate, so I thought it was contextually relevant. Unfortunately, the poor quality of the written English used in this article, and the fact that it was nonetheless published on ScienceDirect, came as an unwelcome surprise. Goulão's (2012) study lacks precision and, generally speaking, detail, thus failing, in my opinion, to give the reader a clear picture of a study that would otherwise have been very beneficial and informative.

The study sought to explore the interaction between the groups involved (six teams divided between two main themes) and how they carried out the task assigned. Unfortunately, neither the themes nor the task itself is ever defined, which makes it difficult for the reader to 'see' the whole picture. A second aim was to record the students' assessment of the teacher's monitoring during the seminar mentioned. However, yet again, there is only superficial information as to how this was done, with no acknowledgement of the potential for bias or for the 'halo' (Thorndike 1920) and Hawthorne (Dornyei 2007:53) effects in the responses from the participants.

The project

The 11 participants were randomly divided into six teams (Goulão 2012:673) to carry out a task that is not defined in the article. The second aim, the assessment of the teacher's monitoring capabilities, was addressed through a questionnaire given to the participants to complete. The period of the study is not defined either and can only be inferred to be confined to the duration of 'an eLearning Master's Degree seminar' (p.673).


The analysis of the behaviour and self-organisation of the participants led to the identification of three models of interaction, which are interesting but again poorly and superficially described. These models show that: 1. a participant takes a leading role, organising the work; 2. a participant initiates the work but then steps back, and the group carries out the task; and 3. there is no organisation of the work, and although the group completes the task in the end, no roles are either assigned or taken for its execution (p.674). As regards the analysis of the responses given by the participants in relation to the monitoring work carried out by the teacher, I would argue that the results are contradictory, or at the very least incongruent with the information provided. For instance, it is reported that 77.8% of the respondents thought the 'teacher created and encouraged the learning environment', while there is no evidence in the article to support this, nor, as mentioned earlier, any acknowledgement of the potential for bias.

Sullivan Palincsar & Herrenkohl's (2002) idea of creating a shared social context in which to engage in collaborative learning is missing, as is the provision of explicit guidelines (Galton 2010:4). While it is true that the aim of the project was to 'analyse the way student groups interacted and answered a proposed task', determining group membership (op.cit.) would have provided clarity for both the participants and the article's readers. As reported in the article, it is not clear whether the participants were left to their own devices for the sake of the project, and this is especially so when looking at the results of the assessment of the teacher's performance (Goulão 2012:676), which point towards teacher involvement in the creation of a learning environment, the management of online discussion, the establishment of clear guidelines for learning, etc. In addition, there is no indication of the creation of interdependence, of time dedicated to developing teamwork skills, or of the building of individual accountability (Carnegie Mellon 2015), which I would argue could have been done implicitly, and to some extent as part of the guidelines, even if the aim of the project was to find out how student groups interacted when carrying out a task. In other words, more information about the type of group work and teamwork skills development these students had previously been exposed to, as well as their understanding of individual accountability, would have had an impact on the interpretation of the results offered.

As regards assessment, it is not clear whether the approach adopted was 'Product' or 'Process' oriented (Galton 2010:5), as Goulão reports both that all the groups accomplished the task and how they did it. However, the information on how they completed the task is only used to determine the models identified, rather than to examine the process or any learning taking place. Unfortunately, the information provided does not allow the reader to determine whether there was any level of intellectual engagement as described by Sullivan Palincsar & Herrenkohl (2002). Along the same lines, there is no reference to the criteria for the assessment of the tasks completed by the participants, to who applied those criteria, or to alternative forms of assessment (Galton 2010:6-7).


Sullivan Palincsar & Herrenkohl's (2002) work on the design of collaborative learning contexts, Galton's (2010) article on assessing group work, and the best practices for designing group projects suggested by the Carnegie Mellon Eberly Center for Teaching Excellence & Educational Innovation (2015) site do not seem to have informed this study in any way.

On a more personal note, I believe that, in line with learning theory and how memory works, this poorly written article has helped me better understand the importance of the work mentioned here, as it has (forcibly) provided me with a good opportunity to analyse, evaluate and synthesise collaboration theory, making use of higher-order thinking skills.


Carnegie Mellon Eberly Center for Teaching Excellence & Educational Innovation, 2015. [online]. Last accessed 2 April 2015.

Dornyei, Z., 2007. Research Methods in Applied Linguistics. Oxford: Oxford University Press.

Galton, M., 2010. Assessing group work. International Encyclopedia of Education, pp.342–347.

Goulão, M.D.F., 2012. The Use of Forums and Collaborative Learning: A Study Case. Procedia - Social and Behavioral Sciences, 46, pp.672–677.

Sullivan Palincsar, A. & Herrenkohl, L.R., 2002. Designing Collaborative Learning Contexts. Theory Into Practice, 41(1), pp.26–32.

Thorndike, E.L., 1920. A Constant Error in Psychological Ratings. Journal of Applied Psychology, 4, pp.25–29. In: Cherry, K., 2015. What is the halo effect? [online]. Last accessed 2 April 2015.

And my reflection on the questions posed...

Consider whether technology played any role in the success (or not) of the activity. 

The Goulão (2012) case study could not have been implemented without the use of technology, as the participants had to use forums in order to complete the assigned task, which constituted the basis for the observation of behaviours. In this sense it could be argued that the study was successful, as the participants completed the tasks, as reported in the article. Unfortunately, the amount of information provided in the article does not allow the reader to form a clear picture of which platform was used, for how long, the type of forums, the type of task, or the guidelines, if any, given to the participants.

Reflect on its design and how it was integrated with the rest of the course.

As above, clarity as regards the design of the study is wanting, as very little detail is given. The project involved students completing a seminar that formed part of a module in an eLearning Master's Degree. We know that there were 11 participants aged between 29 and 52, but there is no indication of their level of IT proficiency, their background, or their course of studies other than that 'they attended the Intercultural Social Psychology subject'. In addition, it is not clear how this study fits into the overall course of studies or timetable, as the contextual information given is very limited. Likewise, it is not clear whether the results of the study informed the researcher's current or future practice, course design or learning outcomes.

Focus on implications of collaboration for course design.

The implications of collaboration were at the heart of this study, as the researcher's main aim was to 'analyse the ways student groups interacted and completed the proposed task'. However, the case study seems to position itself at the beginning of an exploration of collaborative behaviours, seeking to understand and identify them, rather than grounding course design in the implications of collaboration. Nonetheless, the introduction to the article would seem to indicate an attempt by the author to provide the theoretical grounds for the study, which falls short, as it offers a report on collaboration theory rather than an academic argument for the study.

Monday, March 16, 2015

Entry 52 IDT1415 - OERs Evaluation Criteria

TASK - Think about evaluation criteria as providing you with the questions you should be able to answer before you decide that a source of information is worth using. The criteria should be simple, easily answered and not too long – no more than ten different headings and no more than six questions under each one.

Thanks for your very visual mind map, Barb! I must confess that I initially felt a little at a loss, as I couldn't find the examples given for the task, and so your mind map helped me see the light and get started. I like how you distributed your ideas and developed them inside the bubbles. Here's my contribution, which is not as visual but which works for me; I hope you and the others will also find something useful in it. As usual, I thought of my criteria from the point of view of applicability to my own context.

When I started thinking about the evaluation criteria that would help me discern between good and bad OERs, Entry 10, 'Are you driven by technology or pedagogy in your teaching?', in my Reflective Blog immediately came to mind, as there I offered a list of questions which I often ask myself when deciding whether a tool should be integrated into a lesson or session, and which have become a sort of self-assessment criteria. I believe these questions could also be implemented, adapted and reformulated, in addition to the others shown below, with an in-class implementation approach as part of courses delivered F2F or in a blended format, as these are the only options currently available.

This is by no means a comprehensive list; it will be revisited and expanded through more reading and through contributions from the others in the forum. It is simply one possible list, outlining the main criteria I believe to be relevant for my context.

Learning Outcomes
  • What are the stated outcomes of the OER?
  • What part of my lesson would be more engaging and cognitively challenging if I added OERs?
  • Once the above is clear, what tools are required by the OERs?
  • Do I know them already, or do I remember seeing one somewhere which could help me enhance my lesson? Once there is an answer to this question, try and test the tool.
  • Will the lesson be the same without OERs? If yes, discard it. If not, ensure it is recyclable.
  • Will my students need training? If yes, then how am I going to give it, e.g. a quick screencast? In or outside class? With lower levels, usually in class, to 'show' them what I mean. If not, what do they need to know to be independent enough to complete the self-training stage before the next lesson?
  • What are the technical requirements of the OERs? Will all my students have access to it?
Language Learning
  • How are the students' use of the language and their learning experience enhanced by these OERs? Identify this before moving on, otherwise it'll be infotainment!
  • Is the TL for this specific lesson clearly identifiable? Will the students be able to see it and use it?
Learning & Pedagogy
  • Remember Confucius: The more they 'do', the more they will 'understand'.
  • Is the OER cognitively engaging or physically involving (mechanical)?
  • Is the pedagogical approach behind the OERs identifiable?
  • Is it a 'little' OER or a 'big' OER? If little, is a profile of the author available and reliable? If big, what kind of institution is it, e.g. educational, commercial, governmental?
  • How visible are the OERs in the field of education?
  • Are they part of a network? Is it 'alive'? Dated?