Thursday, 15 February 2018

Rubrics North of the Border

[Image: https://upload.wikimedia.org/wikipedia/commons/8/83/Rubric.jpg]
Earlier this week I was in Edinburgh to give a seminar on GradeMark and feedback. While I was there I had an interesting conversation with Paul McLaughlin (Associate Director of Teaching) about using rubrics to speed up marking and the delivery of feedback. Paul has used GradeMark rubrics extensively with large class sizes and passed on a number of useful tips.

As anyone who has ever tried to write a GradeMark rubric knows, the format is rigid and rather limiting since it demands a square matrix, e.g. 4x4 or 5x5, which isn't always what is required. Paul's first tip is that it is possible to write much more flexible assessment schemes by padding the matrix with dummy marks. For example, you may wish to grade four criteria of a written submission using five grades (each with associated feedback), but to grade an oral presentation which forms part of the same assessment, you may need a differently shaped matrix, e.g. three criteria (clarity, content and responses to questions) with six or eight grade descriptors. It is possible to construct such a matrix, and therefore to generate a mark for a multipart assessment, by padding out the unused grades: simply duplicate the same mark several times and leave the feedback section blank, since it won't be used. If you've never used GradeMark rubrics, this description may be as clear as mud, but if you've ever tried to write a rubric which covers the assessment scheme you want rather than the rigid one GradeMark forces on you, it's a lifesaver, and it also makes it much easier to adapt existing marking schemes to a rubric.
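To make the padding trick concrete, here is a minimal sketch of the idea in Python, outside GradeMark itself (the criteria, marks and descriptors are invented for illustration; in GradeMark the same thing is done through the rubric editor rather than code):

# A padded rubric matrix: every criterion must expose the same number of
# grade cells, so unused cells repeat a mark with the feedback left blank.
COLUMNS = 6

criteria = {
    # criterion: (mark, feedback descriptor) pairs, best to worst
    "Introduction": [(5, "Excellent..."), (4, "Very good..."), (3, "Good..."),
                     (2, "Satisfactory..."), (1, "Weak...")],
    "Oral: clarity": [(8, "Outstanding..."), (6, "Clear..."), (4, "Adequate...")],
}

def pad(cells, columns=COLUMNS):
    # Duplicate the final mark with blank feedback until the row is full.
    padded = list(cells)
    while len(padded) < columns:
        padded.append((padded[-1][0], ""))
    return padded

rubric = {name: pad(cells) for name, cells in criteria.items()}
for name, cells in rubric.items():
    print(name, [mark for mark, _ in cells])

The duplicated marks satisfy the fixed matrix shape without changing the marks actually available for each criterion, which is exactly what the dummy cells achieve in GradeMark.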


Paul's second tip involves structuring QuickMarks to speed up the marking of assignments. For a well-structured written submission such as a lab report, Paul writes a series of QuickMarks which follow the structure of the assessment. For example, this might be something like:
01.1 Introduction Feedback comment (Excellent…)
01.2 Introduction Feedback comment (Very good…)
01.3 Introduction Feedback comment (Good…)
01.4 Introduction Feedback comment (Satisfactory…)
01.5 Introduction Feedback comment (Weak…)
02.1 Methods Feedback comment (Excellent…)
etc.
By numbering the QuickMarks in this way it is easy to work through them in sequence, which greatly reduces the time spent typing repetitive Bubble or other inline comments. The feedback is rounded out by a more individualized comment in the GradeMark Text Box.
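The numbering convention is easy to generate in bulk. A trivial Python sketch (the section and grade names here are invented; Paul's actual QuickMark sets may differ):

# Print QuickMark titles in marking order: section number, grade number,
# section name and the opening of the canned feedback comment.
sections = ["Introduction", "Methods", "Results", "Discussion"]
grades = ["Excellent", "Very good", "Good", "Satisfactory", "Weak"]

for s, section in enumerate(sections, start=1):
    for g, grade in enumerate(grades, start=1):
        print(f"{s:02d}.{g} {section} Feedback comment ({grade}...)")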


Lastly, if you've ever sat down to write a rubric from scratch, you may, like me, have run head-on into the brick wall of a lack of inspiration. Paul has made extensive use of the large collection of rubrics available online at: https://www.rcampus.com/indexrubric.cfm
It's unlikely that you'll find a rubric which fits your exact needs, but this resource is a great place to start, and by adapting a pre-existing scheme you can save a lot of time.


Alan Cann






Thursday, 1 February 2018

Stages of SoTL - Reading the Literature

As we heard in Anne Tierney's talk, the first stage of SoTL is to develop a theoretical basis for your work. In part this comes from reflection, but mostly it comes from reading and engaging with (including contributing to) the education research literature - standing on the shoulders of giants.
But reading the education research literature is not easy. For a start, where do you look? PubMed won't help you with this. A good place to start is of course to talk to other people and ask for recommendations, but more generally, the default database for PedR (pedagogic research) is Google Scholar. Here's an example:

[Screenshot: a Google Scholar search]

If you're completely in the dark, citations will give you some clues about what you might read:

[Screenshot: citation counts in Google Scholar results]

And help you find it:

[Screenshot: full-text links in Google Scholar results]

After that – talk to someone and discuss your ideas, or ask them to read your draft conference abstract, grant application, or manuscript and make suggestions. Approach a colleague directly or ask for help via: bs-sotl@lists.le.ac.uk









Wednesday, 17 January 2018

Students’ emotional responses to feedback

It's not just what you say, it's the way that you say it...

There is a pre-existing research literature on students’ emotional responses to academic feedback, and like previous articles, this new paper indicates both how strong those reactions can be and how they influence whether feedback is taken on board. Our (Alan Cann, Tessa Webb, Robin Green, Caroline Smith) current TEPF-funded project focuses on how to write better feedback by closing the gaps in understanding between students and staff. Over the next few weeks we'll be running student-staff workshops to explore this issue and to devise strategies for better communication and, therefore, more effective feedback. You may be invited to participate in one of these workshops, but if you're particularly keen to be there, book your place now by emailing one of the project team.


Tracii Ryan & Michael Henderson (2017) Feeling feedback: students’ emotional responses to educator feedback. Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2017.1416456

Assessment feedback allows students to obtain valuable information about how they can improve their future performance and learning strategies. However, research indicates that students are more likely to reject or ignore comments if they evoke negative emotional responses. Despite the importance of this issue, there is a lack of research exploring if certain types of students are more likely to experience negative emotional responses than others. This study builds on extant qualitative studies through a quantitative examination of two previously identified student variables: different citizenship backgrounds (domestic and international) and different grade expectations (higher or lower than expected). The participants were 4514 students from two Australian universities. Analysis of survey data revealed that, regardless of language experience, international students were more likely than domestic students to find feedback comments to be discouraging, upsetting and too critical. Students who received grades lower than they expected on a particular assessment task were more likely than students who received grades higher than they expected to feel sad, shameful and angry as a result of the feedback comments. This paper concludes with several recommendations, including the need to modify assessment feedback practices in order to be sensitive to different student cohorts.



Wednesday, 22 November 2017

TopHat Tuesday

I used TopHat again this week with a total of 470 Year 1 and Year 2 Biological Sciences students. In both cases students were asked to bring their TopHat login details and an internet-capable device to the lecture with them. Data collected from both sessions is shown below. At the start of both lectures students were shown a PowerPoint slide stating:
"We will be using TopHat in this lecture. Join Code  xxxxxx. You should have already registered for TopHat following the instructions sent to you by the School of Biological Sciences Office. Because TopHat is a commercial product that the University pays to use, you are registered using your University identity and are not anonymous.  Submissions which violate the University's Internet Code of Practice, Use of Computers Policy, Statement Concerning Harassment and Discrimination, and the Regulations for Students can result in disciplinary action."

Year 1: BS1040
This was the first use of TopHat for our Year 1 students. In a BS1040 lecture on emerging infectious diseases, I used one MCQ and one discussion-format question. As in previous sessions, participation in the MCQ-format question was much higher (91%) than in the discussion question (35%). In spite of the warning given at the beginning of the lecture, 16/63 comments (25%) posted on the discussion question were facetious, e.g.
"Go to the Winchester, have a nice cold pint, and wait for all this to blow over."
"The ting go skrrrra pop pop pop pop."
Though not malicious, these comments caused some disruption and distracted attention from the topic, prompting much chattering and laughter in the room when they were posted. I am also conducting an ongoing sentiment analysis of this group's responses; initial results indicate a mixed reaction, with an overall positive bias but including comments about disruption.
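The post doesn't say how the sentiment analysis is being run; purely as an illustration of the kind of comment-level scoring involved, here is a minimal Python sketch using the TextBlob library (an assumption on my part) with invented comments:

# Score each free-text comment on a -1 (negative) to +1 (positive) scale.
# TextBlob is an assumed tool here, and the comments are invented examples.
from textblob import TextBlob

comments = [
    "Really useful way to check my understanding during the lecture.",
    "The silly comments were distracting and wasted time.",
]

for comment in comments:
    polarity = TextBlob(comment).sentiment.polarity
    print(f"{polarity:+.2f}  {comment}")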

Year 2: BS2000
This was the second or third exposure to TopHat for these students. In a lecture on poster design, I used TopHat to show students example posters and asked them to work through the assessment criteria and make their own assessment of the strong and weak points of three examples. Sentiment analysis of the use of TopHat will be included in the module questionnaire, so is not yet available. In spite of the warning given at the beginning of the lecture, facetious, and in one case offensive, comments were once again posted. Although these were fewer in number than in the Year 1 trial (4/256 comments), the nature of the content caused considerable disruption in the room, and the following day I received an email from one student asking us not to abandon TopHat just because a small number of students choose to be disruptive.

Data from both sessions:
[Data table]

A.J. Cann.

Thursday, 16 November 2017

Digital Innovation Partnerships

The new University Digital Innovation Partnership (DIP) programme enables students to work in partnership with staff to enhance and encourage the use of digital technology in learning and teaching within the curriculum. Staff and students jointly identify areas of their learning and teaching environment where digital technology could make a positive contribution, then work together to design, implement and evaluate a digital practice. DIP is designed to foster digital literacies amongst staff and students through the meaningful development of pedagogic projects.

The scheme involves a three-way partnership: a student (Digital Associate) who has confidence and experience in a range of digital tools and approaches, a member of teaching staff (Digital Innovator) who wishes to try out a digital practice in their teaching, and a Digital Advocate, a member of teaching staff who is a confident early adopter of digital approaches and can offer insightful advice on the practicality of implementing an idea. The partnerships will be further supported by the Leicester Learning Institute and IT Services through a schedule of light-touch events across the year.

On 15th November LLI ran an introductory workshop for the three Digital Innovation Partnerships the School of Biological Sciences is running in the pilot programme for this scheme. These are:

  • Dr Kath Clark & Yasmin Wabara (Year 2): Digitization of metabolism teaching.
  • Dr Primrose Freestone & Vishal Chady (Year 1): Introducing concepts of molecular diagnostic genomics into the Year 1 curriculum.
  • Dr Shaun Heaphy & Tally Came (Year 1): Using TopHat to improve student understanding of threshold concepts in the Year 1 curriculum.
We will be reporting on the progress of these pilot projects throughout the year. For more information, please contact DIP@le.ac.uk. If you are interested in being involved in this new opportunity, please complete the application form by 12 noon on Monday 20th November.



Monday, 6 November 2017

Celebrating more funding success

We are pleased to report that this bid submitted to the University Teaching Enhancement Project Fund was successful:

Pedagogy Involving Capture Technology (PICT): Uses of Panopto beyond the recording of lectures (Stream A)

Project Team:
Project leaders: 
Dr Chris Willmott, Dept of Molecular and Cell Biology
Mr Matt Mobbs, Leicester Learning Institute
Collaborators:
Dr Richard Blackburn (Chemistry)
Dr Tracey Dodman (Criminology)
Dr Stewart Fishwick (Geography, Geology & the Environment)
Dr Simon Vaughan (Physics)

Panopto lecture capture technology (known locally as Reflect) has quickly become established as a popular and effective tool for recording lectures, which students can then watch at a time of their choosing. The same technology can readily be employed in a variety of other, non-lecture contexts to generate valuable educational resources. The aim of this project is to investigate existing ways in which capture technology is being used to enhance the education of students. We will evaluate these uses, considering the characteristics that make a particular application successful and, where an application falls short, how refinement might improve engagement with the resource. As part of the work, tried-and-tested uses will be rolled out to new departments, and a “good practice” guide with recommendations for varied and effective applications will be prepared.


Thursday, 2 November 2017

Writing the Rubrics

A number of people in the School of Biological Sciences have been involved in writing assessment rubrics (i.e. performance criteria) recently. It’s not until you start writing these that you find out how difficult they are to compose well! I recently came across the work of Jennifer Gonzalez, who has helpfully drawn some diagrams of the different rubric types:

Holistic, Analytic, and Single-Point Rubrics


Holistic rubric
The main advantage of a holistic rubric is that it’s easy on the teacher. The main disadvantage of a holistic rubric is that it doesn’t provide targeted feedback to students.



Analytic rubric
The main advantage of the analytic rubric is that it gives students a clearer picture of why they got the mark they did. It is also good for the teacher, because it gives her the ability to justify a score on paper, without having to explain everything in a later conversation. Analytic rubrics have two significant disadvantages, however: creating them takes a lot of time, and students won’t necessarily read the whole thing.



Single-point rubric
A single-point rubric is similar to an analytic rubric, because it breaks down the components of an assignment into different criteria. What makes it different is that it only describes the criteria for proficiency; it does not attempt to list all the ways a student could fall short, nor does it specify how a student could exceed expectations. The argument is that single-point rubrics allow for higher-quality feedback, because teachers must specify key problem areas and notable areas of excellence for that particular student, rather than choosing from a list of generic descriptions.
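As a concrete illustration (the criteria and descriptors below are invented, not taken from any published rubric), a single-point rubric for a lab report might be rendered like this, with the concerns and advanced columns left blank for the marker to complete per student:

# Only proficiency is described; "Concerns" and "Advanced" stay blank
# until the marker fills them in for an individual student.
criteria = {
    "Introduction": "States the aim and gives enough background to follow the report.",
    "Methods": "Described in enough detail for the work to be repeated.",
    "Analysis": "Appropriate statistics chosen and correctly interpreted.",
}

for name, proficient in criteria.items():
    print(name)
    print("  Concerns: ____________________")
    print(f"  Proficient: {proficient}")
    print("  Advanced: ____________________")
    print()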