SEDA Conference on Learning and Assessment (May 2016)

I attended the SEDA Spring Conference on Teaching, Learning and Assessment, which was held in Edinburgh on 12-13 May.

This is the link to all the tweets for the day, and this is the link to my own tweets.

It was a great conference, as expected, and this is my account of the two days.


Ian Pirie – Assessment and Feedback

Ian’s keynote focussed on the challenges of implementing good practice in assessment and feedback. His slides were packed with great references to work done in the field and, hopefully, they will be online soon.

  • Ian highlighted the importance of tracking students online, so that educators can evaluate what works and what doesn’t (Wayne Smutz, 2013).
  • Talking about online learning, Ian stated that students want access to information 24/7, and all in one place. Thus, online content should always be designed with students in mind.
  • Ian also highlighted the role of assessment in developing employability skills and what employers want from graduates. I have some reservations about this topic, as I still believe that universities should take what employers want with a pinch of salt. Our role is to deliver academic skills and the ability to learn, not training in specific skills.
  • I really liked Ian’s idea that assessment should be both ‘economically efficient’ as well as ‘academically efficient’. This is a very pragmatic and effective idea: we need to make it good, but also feasible.
  • The big problem with assessment is that it usually fails to meet students’ expectations, and it can drive the wrong behaviour. A few examples of these drivers:
  1. The symbolic use of marking (do we really need a 100-point scale? What does it mean to the students?). Students tend to have a simpler marking rubric in their minds: “I did well, I did good, I did ok, I did awful”. But this scale is often misaligned with the mark they get.
  2. Structured assessments, full of sub-parts and sub-marks that average each other out, do not give a clear picture of where students need improvement; they just drive a strategic behaviour based on compensation and hitting the desired threshold, stripping out the complexity of the learning process. (Grades can conceal actual performance: a 90 and a 30 on two sub-parts average to the same 60 as two steady 60s.)
  3. We should keep in mind that we are assessing the work, not the person, and we should convey this to students as much as we can.
  4. Sometimes the needs of good assessment design and good marking clash with institutional constraints based on turnaround time: a typical issue within a constructive alignment perspective.
  • Ian said that feedback is not effective if there is no evidence of its consequences. Another powerful concept. We often talk about relating assessment to learning outcomes, but do our students know what the learning outcomes are? (Susanne Orr). A way to address this problem is to have students rewrite learning outcomes in their own words, in partnership with staff, to contextualise them.
  • Ian suggests that we should use a ‘graded learning profile’, a form of learning portfolio with minimal aggregation of marks and a clear understanding of the skills attained on the student’s side.
  • Myth busting: it is not true that students will work only to meet minimum requirements, provided the assessment process is constructively aligned.

There is much more I could write about this great keynote; I thoroughly enjoyed it.


Sally Brown and Kay Sambell – Using Exemplars

Sally (@ProfSallyBrown) and Kay explored the use of exemplars to support students in developing assessment literacy. Their key message was that the use of pre-emptive formative assessment can be crucial in the development of unconscious and tacit knowledge about the assessment process. They found that the ‘failing exemplar’ (an example of poor-quality work) was even more helpful than a good-quality one in supporting students’ assessment literacy. My colleague Phil Long emphasized that good assessment literacy should include ideas, connections, and extensions, and that we should guide students through these stages. Sally and Kay’s slides are available here. They have also provided a useful assessment literacy bibliography.


Deena Ingham – Students Setting Summative Assessment

Deena (@DeenaI) challenged us to think about the provocative idea of having students design their own assessment, something that belongs to the Learning Contract Design literature.

Deena based her talk on the idea that we can harness self-authorship (Baxter-Magolda) in assessment, treating intrinsic motivation to deliver good work as a much more powerful driver than extrinsic motivators. Following the principles of self-authorship, students choose and write the criteria for assessment and the learning outcomes the assessment is going to address.


Laura Ritchie – Linking Skills Assessment and Feedback

Mighty Laura (@laura_ritchie) emphasized the importance of student agency in assessment design and delivery. We need to help our students question how they go about performing tasks, and of course we need to help them understand the tasks, acquire the necessary skills, and believe in themselves. (How could self-efficacy not come up when Laura is in the room?) In Laura’s talk, assessment is seen as a ‘criterion validated reflection’: what a beautiful definition! Laura asked us to define what an essay is, and she shared her view that the desired features of an essay should be excellence, reflection, creativity, and learning. I fear creativity is the hardest one to harness, but it is also true that this feature is the most rewarding to discover in our students’ work. Laura has already written a great blog account of her session, well worth a read.


Linda Robson – How Does Student Attainment Influence Feedback?

Linda presented the results of her research on how essay feedback to students relates to the mark awarded. She introduced us to Brown and Glover’s classification of feedback comments, which ranks comments motivationally (positive, negative, neutral) and practically (through indication, correction, and explanation), and then presented her work showing the correlation between marks and the proportion of positive and negative comments. This is fascinating research, which probably needs to be made a little more robust, with larger sample sizes and more structure in the empirical analysis.


Y1Feedback: Technology Enhanced Feedback Approaches for First Year

A large team of researchers (@y1feedback) from Ireland talked us through the material generated by a project aimed at evaluating different Technology Enhanced Learning (TEL) approaches to addressing the needs of First Year students. The First Year focus ties the research to the quest for ways to support transitions through TEL. Can digital technologies open ways to enhance feedback for First Year students? The researchers highlight that the feedback experience of first year students is often inconsistent. They advocate that good feedback needs to be both formal and informal, be feed-forward oriented, and be based on a dialogue. They compiled a wide range of case studies, and they observed that the challenges emerging from their surveys of TEL approaches are: (i) truly dialogic feedback is hard to implement, (ii) the potential of technologies is sometimes hard to realise, and (iii) the problem of competing priorities in feedback delivery is always present. Their website is full of great resources. I think the next stage of their project should consist of distilling what works better and what doesn’t, creating a menu from which teachers can choose the pedagogies and tools best suited to their needs.

This was the end of the first day.

Margaret Price – The Feedback Conundrum

Margaret Price has a similar experience to mine (even though she is much more established than me, of course). She comes from the Business disciplines, but she migrated her research into education and pedagogy. Margaret started her keynote talk on the premise that feedback does not seem to have much of an effect.

  • The discourse on assessment seems rather unsophisticated and superficial to her: issues with fairness, cheating, and grade inflation are always on the agenda, but these are not the core issues.
  • Margaret touched briefly on anonymity, remarking that it was not even much of a debated issue. I must agree with her: it has always been imposed on me…and I hate it!
  • She dug deeper when she said that there is a problem of collusion between staff and students to keep assessment and feedback as they are. More traditional forms of assessment tend to be taken for granted.
  • Margaret remarked that we need to take a programme approach to feedback, not a piecemeal approach. This recalls our opening keynote by Ian Pirie. It led me to think that the constructive alignment theory can show us that sometimes learning and teaching practices are perfectly aligned…but on the wrong path!
  • In line with the presentation on exemplars, we were challenged to reflect on the fact that assessment criteria are assumed to be explicit, but they are still imbued with tacit knowledge that can only be shared by exposure and experience.
  • Marking consistency is another issue: there is huge mark variation, especially in essays. Phil Race tweeted that essays are good for giving feedback, but we should not mark them.
  • What is the impact of feedback? What would generate good impact? Possible answers are: student engagement, understanding issues, experiencing a relational approach, and affecting self-efficacy.
  • What makes good feedback ‘good’, then? Margaret suggests using student-researchers to find out. She claims there are three success factors: (i) technical (presentation), (ii) particularity (personal and engaged feedback), and (iii) recognition of student effort (including the level of detail in feedback).
  • In terms of context-based criteria for good feedback, we can account for: (i) assessment design, (ii) pre-conditions, and (iii) marker predictability. (Timing is not perceived as a big issue).
  • In terms of expectations, we need to account for: (i) mark expectations, (ii) student epistemology, resilience, and beliefs.

Margaret concluded her talk by suggesting that there is unexploited scope for assessment and feedback in the area of student development. This is quite a broad concept, and I will need to think it over.


Shona Robertson – Marking Time: Using rubrics for self-assessment and marking

This presentation discussed the use of marking rubrics in TurnitinUK. It was interesting to hear the pros and cons of using TurnitinUK. It seems that no platform is perfect. Features that caught my attention:

  • Rubrics can help students to self-assess.
  • TurnitinUK links feedback comments to learning outcomes.
  • However, TurnitinUK doesn’t allow different rubrics on the same website: a separate one needs to be set up for each rubric.
  • An advantage of Blackboard is the ability to write in the rubric itself to customise it.


Ourania Ventista – Self-Assessment in Massive Open Online Courses

This presentation highlighted the problems affecting MOOCs: high attrition rates, low engagement, and patriotic bias in peer-marking (peers from similar backgrounds/countries tend to mark each other higher).

  • Coursera addresses the problem using a “calibrated peer-assessment” system (see the sketch after this list).
  • MOOCs also have an element of self-assessment (after receiving the peer-assessment). Self-assessment is still underrated in the literature; peer-assessment, however, is still affected by a lot of attrition.
  • A controversial question is: should ‘effort’ be included in the marking rubric? In my opinion this can be done only by specifying explicit evaluation criteria for it.
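
Since the session did not go into the mechanics, here is a minimal sketch of how a calibrated peer-assessment scheme can work, assuming the common design in which reviewers first mark a few ‘calibration’ submissions that the instructor has already marked, and their agreement with those reference marks decides how much weight their peer marks carry. The function names, the 0-100 scale, and the linear weighting rule are my own illustration, not Coursera’s actual algorithm.

```python
# A minimal, illustrative sketch of calibrated peer assessment.
# The weighting rule and all names here are assumptions for illustration,
# not Coursera's published algorithm.

def calibration_weight(peer_marks, instructor_marks, tolerance=5, decay=50):
    """Weight a reviewer by how closely their marks on calibration
    submissions match the instructor's reference marks (0-100 scale)."""
    errors = [abs(p - i) for p, i in zip(peer_marks, instructor_marks)]
    mean_error = sum(errors) / len(errors)
    # Full weight within the tolerance; linear decay beyond it.
    return max(0.0, 1.0 - max(0.0, mean_error - tolerance) / decay)

def calibrated_mark(reviews):
    """Combine peer marks for one submission; each review is (mark, weight)."""
    total_weight = sum(weight for _, weight in reviews)
    if total_weight == 0:
        return None  # no calibrated reviewers: fall back to instructor marking
    return sum(mark * weight for mark, weight in reviews) / total_weight

# A reviewer who tracked the instructor closely counts more than an erratic one.
careful = calibration_weight([62, 75, 48], [60, 78, 50])  # -> 1.0
erratic = calibration_weight([90, 40, 80], [60, 78, 50])  # -> ~0.45
print(calibrated_mark([(70, careful), (55, erratic)]))    # ~65, pulled toward 70
```

One design choice worth noting in this sketch: unreliable markers are down-weighted rather than excluded, so every submission still receives some peer feedback even when reviewers are poorly calibrated.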


Napier University – Enhancing Assessment and Feedback: using TESTA

Tansy Jessop – Changing the Culture of Assessment and Feedback through TESTA

Two presentations (and a keynote) of the day were based on the development of the TESTA programme. This is a project articulated across audits, experience questionnaires, focus groups, case studies, workshops, and a range of resources designed for programmatic review of assessment and feedback practices. Highlights from these talks:

  • The shift in perspective should move us: (i) from “my module” to “our programme”, (ii) from teacher-centred to student-centred, and (iii) from the NSS to enhancement.
  • In terms of the NSS, key concepts were knee-jerk reactions and coping with poor performance through a ‘spit and polish’ approach.
  • In terms of curriculum design, it was emphasized that content and knowledge are dominant, but there is little training on curriculum design.
  • TESTA highlighted a tendency to base interventions in teaching and learning on misguided assumptions rather than on the ‘academic’ approach of metrics and data analysis. This needs more attention (and partnership with the students).
  • I would just observe that receiving feedback is an emotional process, and we should acknowledge that.
  • The issue of staff workload in marking was an important one. An interesting piece of feedback from staff: “Using online marking gave me back my Christmas holidays”.
  • Reducing staff workload is a good incentive to get staff to buy into innovation, but sometimes staff do not see the advantage of technology.
  • It was advocated that reducing summative assessment and increasing formative assessment can reduce workload, but I would disagree (unless we conduct one less carefully than the other).


That is all for now!