Thursday, 16 November 2017

Digital Innovation Partnerships

The new University Digital Innovation Partnership (DIP) programme enables students to work in partnership with staff to enhance and encourage the use of digital technology in learning and teaching within the curriculum. Staff and students jointly identify areas of their learning and teaching environment where digital technology could make a positive contribution, then work together to design, implement and evaluate a digital practice. DIP is designed to foster digital literacies amongst staff and students in the context of learning and teaching, by helping both groups to develop those literacies through meaningful pedagogic projects.

The scheme involves a three-way partnership between a student (Digital Associate) who has confidence and experience with a range of digital tools and approaches, a member of teaching staff (Digital Innovator) who wishes to try out a digital practice in their teaching, and a Digital Advocate, a member of teaching staff who is a confident early adopter of digital approaches and can offer insightful advice on the practicality of implementing an idea. The partnerships will be further supported by Leicester Learning Institute and IT Services through a schedule of light-touch events across the year.

On 15th November the LLI ran an introductory workshop for the three Digital Innovation Partnerships that the School of Biological Sciences is running in the pilot programme for this scheme. These are:

  • Dr Kath Clark & Yasmin Wabara (Year 2): Digitization of metabolism teaching.
  • Dr Primrose Freestone & Vishal Chady (Year 1): Introducing concepts of molecular diagnostic genomics into the Year 1 curriculum.
  • Dr Shaun Heaphy & Tally Came (Year 1): Using TopHat to improve student understanding of threshold concepts in the Year 1 curriculum.
We will be reporting on the progress of these pilot projects throughout the year. For more information, please contact DIP@le.ac.uk. If you are interested in being involved in this new opportunity, please complete the application form by 12 noon on Monday 20th November.



Monday, 6 November 2017

Celebrating more funding success

We are pleased to report that this bid submitted to the University Teaching Enhancement Project Fund was successful:

Pedagogy Involving Capture Technology (PICT): Uses of Panopto beyond the recording of lectures (Stream A)

Project Team:
Project leaders: 
Dr Chris Willmott, Dept of Molecular and Cell Biology
Mr Matt Mobbs, Leicester Learning Institute
Collaborators:
Dr Richard Blackburn (Chemistry)
Dr Tracey Dodman (Criminology)
Dr Stewart Fishwick (Geography, Geology & the Environment)
Dr Simon Vaughan (Physics)

Panopto lecture capture technology (known locally as Reflect) has quickly become established as a popular and effective tool for recording lectures, which students can then watch at a later time of their choosing. The same technology can readily be employed in a variety of non-lecture contexts to generate valuable educational resources. The aim of this project is to investigate existing ways in which capture technology is being used to enhance the education of students. We will evaluate these uses, considering the characteristics that make a particular application successful and, where it is less successful, how refinement might improve engagement with the resource. As part of the work, tried-and-tested uses will be rolled out to new departments, and a “good practice” guide with recommendations for varied and effective applications will be prepared.


Thursday, 2 November 2017

Writing the Rubrics

A number of people in the School of Biological Sciences have been involved in writing assessment rubrics (i.e. performance criteria) recently. It’s not until you start writing these that you find out how difficult they are to compose well! I recently came across the work of Jennifer Gonzalez, who has helpfully drawn some diagrams of different rubric types:

Holistic, Analytic, and Single-Point Rubrics


Holistic rubric
The main advantage of a holistic rubric is that it’s easy on the teacher; the main disadvantage is that it doesn’t provide targeted feedback to students.



Analytic rubric
The main advantage of the analytic rubric is that it gives students a clearer picture of why they got the mark they did. It is also good for the teacher, because it makes it possible to justify a score on paper without having to explain everything in a later conversation. Analytic rubrics have two significant disadvantages, however: creating them takes a lot of time, and students won’t necessarily read the whole thing.



Single-point rubric
A single-point rubric is similar to an analytic rubric, because it breaks down the components of an assignment into different criteria. What makes it different is that it only describes the criteria for proficiency; it does not attempt to list all the ways a student could fall short, nor does it specify how a student could exceed expectations. The argument is that single-point rubrics allow for higher-quality feedback, because teachers must specify key problem areas and notable areas of excellence for that particular student, rather than choosing from a list of generic descriptions.


Thursday, 26 October 2017

Celebrating Funding Success

We are pleased to report that this bid submitted to the University Teaching Enhancement Project Fund in September was successful:

Research into practice: embedding best practice for the use of Turnitin GradeMark through student-staff partnership. (Stream B)
Alan Cann, Tessa Webb and Robin Green, Department of Neuroscience, Psychology and Behaviour; Caroline Smith, Leicester Learning Institute.
Abstract:
We have completed an audit of feedback practices using Turnitin GradeMark in the Schools of Biological Sciences and Psychology. We found that students who get higher marks receive more positive feedback; in contrast, there is no significant correlation between the mark awarded and the number of negative comments - overall, negative comments outnumber positive comments by more than 5:1. Students who get higher marks receive less feedback and students who get lower marks receive more; since the overall tone of feedback is predominantly negative, students who get lower marks therefore receive more negative comments. Clearly signposted feed-forward is lacking. To mitigate these issues we propose an evidence-based intervention centred on the establishment of a student-staff partnership to devise consistent GradeMark feedback templating and to improve staff awareness of student perceptions of feedback. This will be achieved by staff and students working in partnership to create, test and disseminate a bank of GradeMark feedback comments which are mutually understood and helpful. The partnership will also assist in the production of online staff training materials, which will be rolled out and their impact evaluated against the benchmark data we have already gathered.

Wednesday, 18 October 2017

My First TopHat Experience

Yesterday I ran my first live TopHat session in front of students. I had practised in an empty lecture room, but I was still quite nervous about going live for the first time - fiddling around with USB drives to run the TopHat app is not the ideal preparation for a lecture. In the event TopHat ran without problems, so the purpose of this post is to capture my thoughts on using it for the first time.

The AMS data for the session was as follows:
  • Expected: 291
  • Present: 177
  • Absent: 114
From the TopHat data downloaded after the session, 102 students entered the session, i.e. 58% participation out of the potential audience of 177. I don't currently know why the remaining students did not participate - lack of a suitable device, registration issues, or other reasons?

For this session on experimental design I used three different question formats:

Question   Format        Number    % Participation
1          Discussion    27/102    27%
1b         MCQ           80/102    78%
2          Discussion    26/102    26%
2b         Word answer   15/102    15%

Far more students answered the MCQ than the other formats - was this due to familiarity?

Overall participation was as follows:

Questions answered   No. students   %
0/4                  19             19%
1/4                  43             42%
2/4                  20             20%
3/4                  15             15%
4/4                  5              5%
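
As a quick sanity check on the arithmetic, the sketch below recalculates the percentages in these tables from the raw counts. It is purely illustrative and my own (it is not part of the TopHat workflow, and the variable names are mine); percentages are rounded to the nearest whole number, so one or two values may differ by a point from the figures quoted above.

    # Minimal illustrative sketch (not part of the original post): recalculate
    # the participation percentages from the raw counts quoted above.
    present = 177   # students recorded present by AMS
    entered = 102   # students who entered the TopHat session

    print(f"Session participation: {entered}/{present} = {entered / present:.0%}")

    # Responses per question: id -> (format, number of students answering)
    questions = {
        "1":  ("Discussion",  27),
        "1b": ("MCQ",         80),
        "2":  ("Discussion",  26),
        "2b": ("Word answer", 15),
    }
    for qid, (fmt, n) in questions.items():
        print(f"Q{qid} ({fmt}): {n}/{entered} answered = {n / entered:.0%}")

    # Distribution of how many of the four questions each student answered
    answered = {0: 19, 1: 43, 2: 20, 3: 15, 4: 5}
    for k, n in answered.items():
        print(f"{k}/4 answered: {n} students = {n / entered:.0%}")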



Apart from the greater participation in the multiple choice question, there is no obvious pattern in which questions students answered and which they did not.


Overall, I was quite pleased with the way this session ran. The Discussion-format questions worked well and generated a buzz in the room. I won't know what the students think of the experience until I ask them in the end-of-module questionnaire, but I am planning to use TopHat again on this module and will collect more data at that point.

See also: TopHat Tips - developing best practice for use of TopHat



Alan Cann.


Thursday, 28 September 2017

TopHat Tips - developing best practice for use of TopHat

Through contributions from many people we are gradually building up guidelines for the most effective use of TopHat:


DO set challenging questions which make students think rather than trivial questions simply to use TopHat.

DO allow adequate time for discussion of student-generated data in each session rather than just rushing through your slides.

DO practise using TopHat in a teaching room before you go live in front of students for the first time!

DO be aware that TopHat generates a high level of background noise during class discussions - be considerate of neighbouring classes and use the microphone to ensure you can be heard at the back.

DO make students aware that, because TopHat is a commercial product that the University pays to use, they have been registered using their University identities and their submissions are traceable. Submissions which violate the University's Internet Code of Practice, Use of Computers Policy, Statement Concerning Harassment and Discrimination, or the Regulations for Students can result in disciplinary action.

DON'T overuse it - don't use TopHat in every session you teach and don't try to cram too much into each session - students will get bored, stop participating and their attention will wander off topic.

DON'T limit yourself to MCQs - other formats, such as Word Answer questions which allow word clouds to be generated, offer valuable opportunities for in-class discussion of student understanding and misconceptions.

More: https://leicesterbiosotl.blogspot.co.uk/search/label/TopHat

Monday, 25 September 2017

LLI Focus On: Effective Marking and Feedback Strategies

Notes from the LLI Focus On session: Effective Marking and Feedback Strategies, 13.09.17.
Chaired by Phil Marston, LLI.


Jon Scott, PVC Student Experience: Latest NSS & TEF results

UoL TEF Assessment & Feedback: 68.3% vs benchmark 70.8% - flagged because the divergence from benchmark exceeds 2 percentage points. NSS 2017: UoL ranked 114th out of 128.
Outcomes are uneven across the University. Student complaints about marking criteria. Communication problem with students.
2017-18: Peer Observation of Marking & Feedback for all staff. Data used to support promotion on the basis of teaching excellence.


Colin Hewitt, Chair of the Assessment & Feedback Working Group: The University’s new Assessment & Feedback Strategy
Assessment is a strategic issue. The new Strategy:

  • describes the underlying principles
  • encourages a shared language (staff & students)
  • clarifies roles & responsibilities (staff & students)
  • sets out 6 commitments and 6 design principles, in plain English, with self-assessment tools:

  1. Assessment for and of learning
  2. Coherent assessment at programme level
  3. Assessment literacy
  4. Assessment workload
  5. Authentic assessment, threshold standards, stratify achievement
  6. Align with Digital Campus Strategy



Phil Marston: Practical Approaches to Improving Assessment & Feedback
Anatomy of a marking scheme:

  • ILOs
  • Assignment
  • Criteria - performative (what is required, e.g. Abstract)
  • Rubrics - evaluative (assessing quality, e.g. "Abstract covers whole submission")

Some difficulties with terminology here - in this context a rubric means the instructions students are given for an assignment, NOT a marking rubric as defined by GradeMark. Note that criteria and rubric are separate! LIGHTBULB MOMENT! This is not common practice in the School of Biological Sciences, where we have a tendency to publish assessment criteria and expect students to reverse-engineer these to construct a rubric.



Alan Cann