It just ‘clicks’: Responseware use can improve student test performance

I first encountered clickers as a Master’s student back in 2003, in a physiology module that was delivered through a combination of lectures and lab sessions. There were probably only about 40 of us in the class, but when you are sitting in a lecture theatre it is easy to feel like you’re one of a nameless, faceless multitude of students, lacking much of a personal relationship with even the most charismatic of lecturers. Our lecturer was, indeed, very personable, delivering content in a relaxed and conversational way, sprinkling it with personal anecdotes and references to current events and pop culture in order to make the topic seem more relevant, and using questions (both asked and answered) to establish a rapport with his students and encourage their participation–all excellent teaching practice. All the same, there were moments when my eyes would drift towards the clock and I’d start counting down the minutes until I was free.

I’m not going to say that the clickers completely eliminated that problem, but they did spice things up in a way that not only mitigated potential boredom but also helped me (and, I assume, at least some of the other students) engage more fully with the lecture material–and, in so doing, begin to prepare for the exams. When the clickers were first introduced, I thought they seemed a little gimmicky, but I quickly noticed the sorts of benefits that have now been documented extensively in the pedagogical literature–I was able to identify which topics I understood and which ones I needed to read up on; I was able to ask the lecturer for clarification on the spot; I could seek explanations from my peers or work with them to select a correct answer; I got a sense of how to apply the concepts we were learning and, therefore, how to acquire the sort of nuanced and in-depth understanding needed to use the course content in my research–and, of course, to pass assessments. I am not sure I fully appreciated all this at the time, but I do see it now, and I have a new-found respect for the lecturer who took the time to learn about then-new technology and take a gamble on experimenting with it in the classroom.

Clickers have since become more mainstream and widespread, to the point that you don’t even need a physical clicker anymore; here at the University of Exeter, for example, TurningPoint software allows lecturers to take advantage of the fact that students come to class equipped with smartphones, tablets, and laptops, which can now be used as de facto clickers–saving the hassle of signing out the traditional remote-control handsets, keeping their batteries charged, and worrying about having enough for all students. We have now moved on from asking “Are clickers useful?” to wondering about more complex issues such as “How do we best employ clickers to maximise their usefulness to the widest range of students?”

This type of question was recently explored in a study run by academics from the University of Nebraska. Published in the journal Computers & Education, ‘The positive effect of in-class clicker questions on later exams depends on initial student performance level but not question format’ examines, as its title suggests, two factors that the authors thought might influence the impact of clickers: the format of questions the clickers are being used to answer, and the level of achievement (in terms of class performance) of the students who use the devices. These are just the tip of the iceberg in terms of which characteristics might affect clicker utility; likewise, subsequent test performance is only one way to judge efficacy–all issues that the authors acknowledge while pointing out (quite rightly) that you need to start somewhere.

[Figure 1: Experimental design]

By assigning students unique clickers that each individual used consistently throughout their introductory biology course, the researchers could look for patterns in, and associations between, in-class answers before and after peer discussion of the focal clicker questions, as well as the relationship between clicker activity and test results. Further, they could pose both clicker questions and associated exam questions in different ways in order to see whether question format affected students’ ability to answer correctly. Two question formats (for both clicker questions and exams) were explored: standard multiple choice (MC in the figure above), where students pick the single best answer from several options, and multiple-true-false (MTF), where students assess multiple statements and decide whether each is true or false. This variation resulted in four potential clicker-to-exam pathways for students: 1) MC clicker question and MC exam question; 2) MC clicker question and MTF exam question; 3) MTF clicker question and MTF exam question; and 4) MTF clicker question and MC exam question.

The study found that question format had no influence on exam performance: ‘For both MC and MTF exam questions, there was no difference in performance based on whether students answered the corresponding clicker questions in the MC or MTF format’. This was somewhat unexpected, since MTF questions require students to evaluate every option and might therefore be expected to foster a deeper understanding than the simpler MC format. However, regardless of whether the question format was MC or MTF, there was some evidence that clicker usage did have a beneficial effect on assessment outcomes: Students who attended class and participated in clicker activities were over 1.5 times more likely to answer the corresponding exam question correctly. Further analyses painted a somewhat more complicated picture; there did seem to be knowledge gains between the first and second clicker questions and between the first question and the exam, but these were not consistent across all types of questions or all types of students. Particularly interesting was the finding that the top students seemed to experience the most significant and long-term benefits, while under-performing students did not seem to get a sustained boost from participating in clicker activities.

These patterns raise the questions of exactly how and why clicker use could influence learning and retention of knowledge. Each clicker question was posed twice; students’ first answers reflect their pre-existing understanding, while their second answers reflect a potentially updated understanding incorporating knowledge they may have gained during in-class peer discussions. The benefits of peer learning have long been known, and so it is perhaps unsurprising that the current study found evidence that sharing knowledge amongst students could improve performance on the second round of clicker questioning; unfortunately, however, this did not seem to translate consistently into better performance on the exam.

So what are the take-home messages of this work? First of all, it seems clear that the benefit of the clickers stems not from exposing students to particular formats of questions, but simply from encouraging them to participate actively in class by pondering the course material and talking with peers. Second, the study shows that we would benefit from a clearer understanding of the dynamics of peer discussions: What is happening during these chats that drives the differential benefits for lower- and higher-performing students? Third, though exam performance can be improved through clicker activity, the improvement is not uniform across all students, and the mechanism behind this pattern is not fully understood. Further research might explore whether students are simply memorising content related to clicker questions, using those questions to flag important concepts for further attention, or perhaps just benefiting from encountering the same ideas on multiple occasions.

Although the study does raise new questions and highlight a need for future work that can help lecturers refine their use of clickers, it also provides additional evidence that using these devices is, in general, a good way of adding variety to lectures and improving uptake of knowledge. Hopefully this will be encouraging to those lecturers who already take the time to plan clicker-based activities for their classes, as well as inspiring a few new adopters to give the technology a try. That said, the research also emphasises a fundamental concept in teaching and learning: Not all students are the same; they have different needs and will respond to activities in different ways, and so it is important to assess responses to different exercises in order to determine whether anyone is being left out or left behind. As the authors point out, clicker questioning is just one of many potential teaching techniques; it is more important for lecturers to understand and engage with such essential pedagogical theories than to chase the latest fad for turning those theories into real-world practice.
