# ELIXIR-EXCELERATE Train-the-Trainer subtask
- Using questionnaires to promote peer instruction and content delivery
- [Design of MCQs with distractors](#design)
- [Assessment of training quality, participant and instructor performance](#assessment)
- [Systematic feedback](#systematic)
- [(Short- and long-term) post-course feedback](#short)
## Feedback to and from learners
"Usually, teachers give feedback to their students, while trainers receive feedback from the learners;"
We have to be aware of the difference between giving feedback to learners and receiving feedback from learners. Both types of feedback are useful but have different purposes.
### Challenge 1: what kind of feedback/assessment do you know as a learner?
- What type of assessment did you undertake as a learner?
- What was its purpose in your opinion?
- Was it useful to your learning?
Write at least one example and discuss it with us.
##Feedback to learners
Feedback to learners is anything we do to help both ourselves, the instructors, and the learners get information about whether learning is occurring (during teaching) or has occurred (at the end of teaching). Grades are an example of a type of feedback we can give to students to inform them how they performed in a test or exam. Grading, on the one hand, informs instructors whether the learning took place and whether learners are ready to move on; on the other, it should make learners aware of the knowledge and mastery they had attained by the time they took the test.
Feedback to learners can be summative or formative.
Summative assessment. An exam or a test at the end of a course is an example of summative assessment. Summative assessment is aimed at evaluating learners' performance at the end of teaching (this could be at the end of a topic, a session, or at the end of the entire course). This is the most frequent type of assessment occurring in schools and universities and usually includes grading. It is less frequent in training.
Formative assessment. Formative assessment takes place during teaching and learning. Its purpose is to help both instructors and learners to become aware of what the focus should be.
"Classroom assessment's purpose is to improve the quality of student learning, not to provide evidence for evaluating or grading students. The assessment is almost never graded and are almost always anonymous." (from From Angelo & Cross, Classroom Assessment techniques, a Handbook for College Teachers)
Formative assessment can be used to collect information about learners'
- prior knowledge
- mental models
- level of mastery of the topic at hand
- goals and objectives
- frequent mistakes
and can help instructors and learners understand
- which knowledge gaps need to be filled before moving on
- whether their mental models are correct
- if the level of mastery is sufficient according to the course's learning objectives and outcomes
- if learners' goals and objectives are aligned with the course's goals and objectives
- which types of mistakes need special attention
### Challenge 2: what kind of feedback/assessment do you use as an instructor?
- What types of feedback/assessment strategies do you use in your courses?
- What is their purpose?
- Do you mostly provide feedback to learners, or do you mostly collect feedback from learners?
- Are your assessment techniques mostly summative or formative?
Write at least one example and discuss it with the others.
From the GLOSSARY OF EDUCATION REFORM (also in PDF):
Formative assessment refers to a wide variety of methods that teachers use to conduct in-process evaluations of student comprehension, learning needs, and academic progress during a lesson, unit, or course.
Formative assessments help teachers identify concepts that students are struggling to understand, skills they are having difficulty acquiring, or learning standards they have not yet achieved so that adjustments can be made to lessons, instructional techniques, and academic support.
The general goal of formative assessment is to collect detailed information that can be used to improve instruction and student learning while it’s happening.
What makes an assessment “formative” is not the design of a test, technique, or self-evaluation, per se, but the way it is used—i.e., to inform in-process teaching and learning modifications.
In order to be useful during teaching, formative assessment has to be quick to administer and evaluate.
Formative assessment can be used as a teaching strategy and, as such, as an actual opportunity to learn.
In particular, it can be used in the following ways:
"Students' prior knowledge can help or hinder learning"
(Ambrose et al. (2010), "How Learning Works", principle 1)
- Strategies to activate prior knowledge:
  - make examples taken from real life;
  - ask students questions designed to trigger recall; this can help them use prior knowledge to aid the integration and retention of new information.
- Strategies to reveal accurate but insufficient prior knowledge:
  - administer a diagnostic questionnaire. In preparing a diagnostic questionnaire, be aware of the difference between "declarative knowledge" (knowing what) and "procedural knowledge" (knowing how and when to apply various procedures, methods, theories, etc.). A questionnaire is sufficient to assess declarative knowledge; solving a small exercise may help test procedural knowledge (see the sketch after this list).
  - administer a self-assessment questionnaire. Self-assessment may be a problem because students may not be able to accurately assess their own abilities. Generally, people tend to overestimate their knowledge and skills. Accuracy improves when the response options are clear and tied to specific concepts or behaviours. An example is given in Ambrose et al. (2010), Appendix A.
- Strategies to help learners recognise inappropriate prior knowledge. If students are explicitly taught the conditions and contexts in which knowledge is applicable (and inapplicable), this can help them avoid applying prior knowledge inappropriately (example: Python "methods"; see the sketch after this list).
  - Make a list of keywords essential to the topic you are teaching and ask learners to classify the terms introduced in a session at the end of that session. Example: pin cards with Python categories (modules, built-in functions, methods, data types, etc.) to the classroom wall, write Python terms on cards, and ask learners to pin each term under the correct category while explaining aloud why they are placing it there.
- Strategies to highlight inaccurate prior knowledge. Inaccurate prior knowledge can be corrected fairly easily if it consists of relatively isolated ideas or beliefs that are not embedded in larger conceptual models (for example, the belief that Pluto is a planet). Some kinds of inaccurate prior knowledge - called misconceptions - are remarkably resistant to correction. Misconceptions are models or theories that are deeply embedded in students' thinking (e.g. the notion that objects of different masses fall at different rates, "folk psychology" myths such as that blind people have more sensitive hearing than sighted people, or that the seasons depend on the distance of the Earth from the Sun). Misconceptions are difficult to refute for a number of reasons: 1) many of them have been reinforced over time and across multiple contexts; 2) they often include accurate as well as inaccurate elements, so students may not recognise their flaws; 3) in many cases, they allow for successful explanation and prediction in a number of everyday circumstances.
  - Administer MCQs with distractors (see below).
  - Use an anonymous diagnostic questionnaire as described below.
..... (see Small Teaching)
- To highlight learners' weaknesses and difficulties, and thereby set the pace of the following teaching.
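To make the declarative/procedural distinction and the Python "methods" example above concrete, here is a minimal sketch in Python; the questionnaire item, the `word_lengths` exercise, and the category examples are our own illustrations, not taken from any of the cited materials.

```python
# A declarative-knowledge item ("knowing what"): a single factual question.
declarative_item = {
    "stem": "What does the built-in function len() return for a list?",
    "answer": "The number of items in the list",
}

# A procedural-knowledge item ("knowing how"): a small exercise to solve.
# Asking learners to *write* this function tests whether they can apply
# what they know, not merely state it.
def word_lengths(words):
    """Return the length of each word in the input list."""
    return [len(word) for word in words]

# The card-sorting categories from the exercise above, with one example each.
# Note the classic confusion: len() is a *built-in function*, while
# str.upper() is a *method*, called on an object with dot notation.
categories = {
    "module": "math",            # imported with `import math`
    "built-in function": "len",  # called as len(obj)
    "method": "str.upper",       # called as "abc".upper()
    "data type": "list",         # e.g. [1, 2, 3]
}

if __name__ == "__main__":
    assert word_lengths(["feedback", "to", "learners"]) == [8, 2, 8]
    print(declarative_item["stem"])
    for category, example in categories.items():
        print(f"{category}: {example}")
```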
Formative assessment can be done in many different ways:
- Asking questions to learners and getting responses orally;
- Asking them to describe the strategy they would adopt to solve a problem;
- Asking them to solve a problem in groups, or individually but in front of the class;
- Using brainstorming and discussions;
- Providing diagnostic questionnaires;
- Providing MCQs with distractors.
These questionnaires can be anonymous or not; we suggest that you administer them anonymously.
This is an example of a diagnostic questionnaire for a session on PPI resources.
Here you can see what the responses look like.
From the Socrative website:
Socrative is your classroom app for fun, effective classroom engagement. No matter where or how you teach, Socrative allows you to instantly connect with students as learning happens.
(You can) quickly assess students with prepared activities or on-the-fly questions to get immediate insight into student understanding. Then use auto-populated results to determine the best instructional approach to most effectively drive learning.
We quote from the Center for Teaching at Vanderbilt University.
A multiple choice question consists of a problem, known as the stem, and a list of suggested solutions, known as alternatives. The alternatives consist of one correct or best alternative, which is the answer, and incorrect or inferior alternatives, known as distractors.
Multiple choice test questions, also known as items, can be an effective and efficient way to assess learning outcomes. Multiple choice test items have several potential advantages:
Versatility: Multiple choice test items can be written to assess various levels of learning outcomes, from basic recall to application, analysis, and evaluation. Because students are choosing from a set of potential answers, however, there are obvious limits on what can be tested with multiple choice items. For example, they are not an effective way to test students’ ability to organize thoughts or articulate explanations or creative ideas.
Reliability: Reliability is defined as the degree to which a test consistently measures a learning outcome. Multiple choice test items are less susceptible to guessing than true/false questions, making them a more reliable means of assessment. The reliability is enhanced when the number of MC items focused on a single learning objective is increased. In addition, the objective scoring associated with multiple choice test items frees them from problems with scorer inconsistency that can plague scoring of essay questions.
Validity: Validity is the degree to which a test measures the learning outcomes it purports to measure. Because students can typically answer a multiple choice item much more quickly than an essay question, tests based on multiple choice items can typically focus on a relatively broad representation of course material, thus increasing the validity of the assessment.
The key to taking advantage of these strengths, however, is construction of good multiple choice items.
.........
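To make the stem/answer/distractor terminology concrete, here is a minimal sketch of how an MCQ might be represented and checked programmatically; the `MultipleChoiceQuestion` class and the example item are our own illustration, not part of the Vanderbilt material.

```python
from dataclasses import dataclass
import random

@dataclass
class MultipleChoiceQuestion:
    stem: str          # the problem posed to the learner
    answer: str        # the single correct (or best) alternative
    distractors: list  # incorrect or inferior alternatives

    def alternatives(self):
        """Return the answer and the distractors together, shuffled."""
        options = [self.answer] + self.distractors
        random.shuffle(options)
        return options

    def is_correct(self, response: str) -> bool:
        return response == self.answer

# Invented example item: each distractor targets a plausible error.
question = MultipleChoiceQuestion(
    stem="In Python, what does len('bioinformatics') evaluate to?",
    answer="14",
    distractors=["13", "15", "It raises a TypeError"],
)

if __name__ == "__main__":
    for i, option in enumerate(question.alternatives(), start=1):
        print(f"{i}) {option}")
    print("Correct!" if question.is_correct("14") else "Try again.")
```

Note how each distractor is designed to be plausible: an off-by-one count, or the misconception that len() does not apply to strings.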
Write three MCQs (in your field of teaching) revealing:
- a knowledge gap ("what")
- a weakness in a practical skill ("why, when, how")
- a misconception
[Here](./docs/angelo_and_cross_50_cats.pdf) you can find the 50 CATs by Angelo and Cross. These are fifty assessment techniques, grouped by purpose, which can be used in teaching and in training. Some of them apply better to university semester courses or high school classes, whereas others may also turn out to be useful in training courses/sessions. They are fully described and discussed in the book "Classroom Assessment Techniques: A Handbook for College Teachers" (1993) by the same authors.
Techniques for Assessing Course-Related Knowledge & Skills
- Assessing Prior Knowledge, Recall, and Understanding
- Assessing Skill in Analysis and Critical Thinking
- Assessing Skill in Synthesis and Creative Thinking
- Assessing Skill in Problem Solving
- Assessing Skill in Application and Performance
Techniques for Assessing Learner Attitudes, Values, and Self-Awareness
- Assessing Students’ Awareness of Their Attitudes and Values
- Assessing Students’ Self-Awareness as Learners
- Assessing Course-Related Learning and Study Skills, Strategies, and Behaviors
Techniques for Assessing Learner Reactions to Instruction
- Assessing Learner Reactions to Teachers and Teaching
- Assessing Learner Reactions to Class Activities, Assignments, and Materials
A description of the book written by the authors can be found here.
In the following we report the seven assumptions on which the CATs are based, and five suggestions for using them fruitfully and effectively:
From Angelo and Cross:
Classroom Assessment is based on seven assumptions:
- The quality of student learning is directly, although not exclusively, related to the quality of teaching. Therefore, one of the most promising ways to improve learning is to improve teaching.
- To improve their effectiveness teachers need first to make their goals and objectives explicit and then to get specific, comprehensible feedback on the extent to which they are achieving those goals and objectives.
- To improve their learning, students need to receive appropriate and focused feedback early and often; they also need to learn how to assess their own learning.
- The type of assessment most likely to improve teaching and learning is that conducted by faculty to answer questions they themselves have formulated in response to issues or problems in their own teaching.
- Systematic inquiry and intellectual challenge are powerful sources of motivation, growth, and renewal for college teachers, and Classroom Assessment can provide such challenge.
- Classroom Assessment does not require specialized training; it can be carried out by dedicated teachers from all disciplines.
- By collaborating with colleagues and actively involving students in Classroom Assessment efforts, faculty (and students) enhance learning and personal satisfaction.
Five suggestions for a successful start:
- If a Classroom Assessment Technique does not appeal to your intuition and professional judgement as a teacher, don't use it.
- Don't make Classroom Assessment into a self-inflicted chore or burden.
- Don't ask your students to use any Classroom Assessment Technique you haven't previously tried on yourself.
- Allow for more time than you think you will need to carry out and respond to the assessment.
- Make sure to "close the loop." Let students know what you learn from their feedback and how you and they can use that information to improve learning.
In active learning environments, learners are so involved in the learning process that they often lose awareness of their accumulated knowledge and of how operational it has become. Learning by doing catches them up in the process, so they often forget to assess it.
In good-quality training, instructors make an effort to keep the interaction loop closed. As facilitators, they can give steering contributions to this build-up.
At carefully chosen times, it may be useful to intervene and stimulate self-assessment (see how under Instant Feedback below). Self-assessment helps learners regain this awareness. Learners verify that they can do things that they could not do by themselves before, or at least that their need for external aid is diminishing. This can be seen as a work-out process towards gaining independence, or mastery, in a subject matter. The conscious learner feels "empowered". It is up to the instructor to moderate this empowerment and keep it within reasonable limits. Learners who do not feel empowered often find it by comparing their experience with that of their peers. Dialogues between learners will occur naturally, but can also be stimulated by reflective exercises (such as those used in Software and Data Carpentry). The instructor will learn to adapt the level of intervention to each situation, keeping in mind that the learner is the focus of the learning process, and that the instructor/learner relationship is the cornerstone of learning as a stimulated human activity.
The recently empowered learner will naturally want to test new knowledge by using it in different contexts. Simple observation of our experience as human beings shows that if our knowledge of a subject is solid, it may also work in different settings or environments. This is good as a positive test, but the novice learner may fail in several respects, for example by overlooking assumptions. In the closed-loop interaction that is desired in a training environment, this can be seen as experimentation, subject to exactly the same rules as any experimental work. The instructor can help steer this process, stimulate the testing when they see value in it, help highlight and avoid the pitfalls, validate the outcomes, and so on. In this way the instructor directly stimulates critical thinking.
[A note regarding its usage in training quality assessment] The usage independence gained in active learning environments is a measure of training effectiveness, and it can usefully be associated with each learning instance. In particular, a well-designed training exercise with a well-defined learning outcome can be seen as a gauge for measuring effectiveness in a focused way. If this technique is applied systematically in a training instance (a course, a programme), overall quantitative data about training effectiveness may emerge. These data will need to be validated via independent testing, compared with other assessment methodologies, and ultimately subjected to a critical appraisal of their value.
Quotes from "Peer Instruction: Getting Students to Think in Class" by Eric Mazur, PDF available here.
".. while listening is largely a passive activity, reading more easily engages the mind and it allows more time for the imagination to explore questions."
"the first exposure to new material comes from reading printed material before the lecture reading."
to be continued .... SEE QUESTIONNAIRE here
Feedback from learners is aimed at:
- assessing learner reactions to teachers and teaching thus providing context-specific feedback that can improve teaching within a particular course;
- assessing learner reactions to class activities, assignments, and materials thus giving instructors information that will help them improve their course materials and assignments;
- assessing learner reactions to course organisational aspects, thus providing the organiser with information that will help him or her improve the course organisation.
Examples of feedback questionnaires:
- This is the type of questionnaire we developed for ELIXIR Italy courses:
Feedback questionnaire for the ELIXIR Italy course on "NGS for evolutionary biologists: from basic scripting to variant calling" (Rome, 23-27 November 2015)
We adapt it to each new course.
- This is the type of questionnaire used to assess the quality of bioinformatics courses organised and delivered by the Gulbenkian Training Programme in Bioinformatics at the Instituto Gulbenkian de Ciência:
Feedback questionnaire for the course on "Bioinformatics using Python for Biomedical Researchers" (Oeiras, PT July 11th – July 15th 2016)
In a training course, getting feedback at the end of the event is necessary, as by then the participants may (and should) have developed encompassing, integrated views. However, it is vastly insufficient. Questioning participants frequently during training is rich in information and has very interesting effects. But when should this happen? And how can it be done so that, like a drug, it has as many positive effects and as few adverse effects as possible?
When? Ideally at natural breakpoints, such as at the end of an exercise, when shifting to a different subject, and right after a wrap-up session.
How? It should be very focused and quick to execute. The instructor should think of a clearly stated question that has a binary (yes/no) or graded (0-5) response. Ideally, the instructor should write the question down and display it, ensuring that everybody knows what the question is at the same time and is aware of the answering method. Then the instructor collects the answers and records them in a tally.
This is Instant Feedback.
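Keeping such a tally requires nothing more than counting each response value. Here is a minimal sketch; the response values and the median rule of thumb are our own assumptions, not a prescribed procedure.

```python
from collections import Counter

# Graded (0-5) responses to one instant-feedback question, e.g.
# "How much did you learn during this lesson? Show zero to five fingers."
# The values below are invented for illustration.
responses = [4, 3, 5, 2, 4, 4, 3, 1, 5, 3, 4, 2]

# Count how often each score occurred and print a small histogram.
tally = Counter(responses)
for score in range(6):
    print(f"{score}: {'#' * tally[score]} ({tally[score]})")

# An assumed rule of thumb: if the median response is low,
# consider revisiting the material before moving on.
median = sorted(responses)[len(responses) // 2]
print("median:", median)
if median < 3:
    print("Consider revisiting this topic before moving on.")
```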
Several methods have been tested, some of them using technology (clickers, Socrative, Learning Catalytics) and some not (the Fist or Five method). The choice is made according to the availability of the means and how engaging the audience finds them.
By Allegra Via, Kristian Rother and Pedro Fernandes. From: Academis, by Kristian Rother.
How well was your explanation understood? How useful was an exercise? Is your class enthusiastic or frustrated? During a one-week programming course at IGC, Portugal, we asked after each training module:
"How much did you learn during the lesson? Please show one to five fingers. Raise your hands!"
Then we counted how often each number of fingers occurred. This way, the trainees felt more encouraged to provide critical feedback than if we had simply asked:
"did you understand it or not?"
Trainees do not necessarily use all five fingers. Our course participant Patricia commented:
"It is a good feedback and it is immediate. Although I feel sometimes a little bit shy to express my opinion."
The method needs seconds to execute and no preparation, which is a plus for the teacher. But trainees benefit as well. Our course participant Rita commented:
"I like it because it makes me think. It forces me to review and figure out whether I understood the subject or not and how much. It also shows you are interested."
This feedback is not an objective measure of students' knowledge; rather, it gives an indication of how confident they feel at a given point. You can suggest examples of what a zero or a five means, as in the linked article. The Fist or Five technique has also been recommended as a voting procedure for reaching consensus in group discussions. You may test the method after giving a presentation to evaluate yourself.
The numbers we accumulated over more than a dozen sessions, using one consistent method, helped us to keep the course on track. The counting itself needed a bit of practice to do quickly. When we used the Fist or Five technique for the first time in 2012, with a group of 20 people, we asked for each number from zero to five separately; this took a bit longer. For us, the main value of the Fist or Five technique is that it is easy to execute, quantitative, not stressful, immediate, and can be repeated many times during a course. We hope you will see lots of 'high fives' in your next course!
Notice that the Carpentry teaching practices quoted in session 2 - Sticky notes; Minutes cards; One-up, one-down - are forms of Instant Feedback.
- For the LEARNER. Carefully implemented instant feedback obliges learners to introspect, to answer for themselves first (do I really know this? How easy is it for me to do this by myself?). In this way they are made aware of their own progress, which is the smartest way to gain self-confidence. When questioned in the end-of-course questionnaire, they are much more able to make encompassing self-assessments.
- For the INSTRUCTOR. Instant feedback offers multiple ways of checking whether what has just been done was effective, depending on the quality of the question. It gives a useful assessment of the quality of the materials and of the performance of the instructor. It is a way of identifying learners who may be lagging behind and need more attention, and learners who are getting ahead of the others in the group and can become more active, receive harder assignments, help their colleagues, etc. It is also a way to judge whether the pace of training delivery is correctly chosen for the audience.
Long-term assessments are rather difficult. First, because learners move and become more difficult to contact. Secondly, because they forget, as we all do. In this case they forget the hidden details of what worked for them, and those details matter, because what we are looking for here is the assessment of impacts that endure.
Interviewing former course participants would be a possibility, but it requires a lot of time. Sending them short questions by e-mail has worked with a yield of about 30%, so unless you are training at least several hundred people, you are likely to end up with a very small number of answers. Currently we see some hope in the use of social networks to collect valuable data.
Critical appraisals often happen in casual conversations. One should take notes to record them.
Example: Pedro Fernandes, Pooja Jain, Catarina Moita Training Experimental Biologists in Bioinformatics, Adv Bioinformatics. 2012;2012:672749. doi: 10.1155/2012/672749. Epub 2012 Jan 31. (Open Access)
There are several methods that can be used to evaluate training. One of the most frequently referenced comes from Donald Kirkpatrick (1924-2014).
- Level 1: Reaction. The degree to which participants find the training favorable, engaging and relevant to their jobs.
- Level 2: Learning. The degree to which participants acquire the intended knowledge, skills, attitude, confidence and commitment based on their participation in the training.
- Level 3: Behavior. The degree to which participants apply what they learned during training when they are back on the job.
- Level 4: Results. The degree to which targeted outcomes occur as a result of the training and the support and accountability package.
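As a rough illustration of how the four levels can be turned into concrete evaluation questions for a bioinformatics course, one might sketch something like the following; the example questions are our own inventions, not prescribed by Kirkpatrick's model.

```python
# Illustrative mapping from Kirkpatrick levels to example evaluation
# questions; the questions are invented and not part of the model itself.
kirkpatrick_levels = {
    "Level 1: Reaction": "Did you find the course engaging and relevant to your work?",
    "Level 2: Learning": "Can you now write a script that parses a FASTA file?",
    "Level 3: Behavior": "Three months on, do you use these tools in your daily work?",
    "Level 4: Results": "Has your group's analysis throughput measurably improved?",
}

for level, question in kirkpatrick_levels.items():
    print(f"{level} -> {question}")
```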
This model has been revised and expanded several times, see for example:
http://www.kirkpatrickpartners.com/OurPhilosophy/TheNewWorldKirkpatrickModel/tabid/303/Default.aspx
Applying the Kirkpatrick model and its variants is not easy. One needs to be very careful in checking pre-requisites, assumptions and options in the measurement methods.
The evaluation of training efficiency is a difficult subject. There is an obvious need for standardisation, to allow the comparison of observations.
You may like to read an article about applying Kirkpatrick's methods. https://www.mindtools.com/pages/article/kirkpatrick.htm