Active learning, active pushback, and what we should take away from a new study of student perceptions
“Compared with students in traditional lectures, students in active classes perceived that they learned less, while in reality they learned more.” — Louis Deslauriers, Logan S. McCarty, Kelly Miller, Kristina Callaghan, and Greg Kestin, “Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom.”
That’s the conclusion of a brand new and
already-attention-getting article just published in the Proceedings of the
National Academy of Sciences, and a close reading suggests that the attention
is well deserved.
It reports a scrupulously conducted study of two
different approaches to teaching physics concepts: a traditional, teacher-focused
lecture, and a student-focused active learning setup involving group work on
problems supported by instructor feedback and guidance.
To most of us with an interest in such things, the idea
that active learning = good is not exactly new. So what is world-rocking about
what Deslauriers and colleagues did?
One thing is their use of a beyond-gold-standard true experimental design. Individual students (not whole sections, which is usually as good as it gets in this kind of research) were randomly assigned to control and experimental groups corresponding to the two teaching approaches. One class session was taught with each method, and then the groups were swapped so that every student experienced both methods. The same went for the instructors: each taught once with each method, eliminating the question of whether students were responding to the instructional method or to the instructors themselves.
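For readers who like to see a design laid out concretely, here is a minimal sketch in Python of the crossover logic described above. The roster, group sizes, and instructor names are entirely hypothetical, not taken from the study; the point is simply how individual-level random assignment plus a swap ensures that every student, and every instructor, experiences both methods.

```python
import random

# Hypothetical roster -- purely illustrative, not the study's actual data.
students = [f"student_{i:02d}" for i in range(20)]
random.seed(1)  # fixed seed so the illustration is reproducible
random.shuffle(students)

# Randomly split individual students (not whole sections) into two groups.
half = len(students) // 2
group_a, group_b = students[:half], students[half:]

# Period 1: one group gets the lecture, the other gets active learning.
period1 = {"lecture": group_a, "active": group_b}
# Period 2: the groups swap, so every student experiences both methods.
period2 = {"lecture": group_b, "active": group_a}

# Instructors cross over too: each teaches once with each method, so any
# difference between methods can't be pinned on a particular instructor.
instructors = {
    1: {"lecture": "instructor_A", "active": "instructor_B"},
    2: {"lecture": "instructor_B", "active": "instructor_A"},
}

for period, assignment in ((1, period1), (2, period2)):
    for method, group in assignment.items():
        print(f"Period {period}, {method}: {len(group)} students, "
              f"taught by {instructors[period][method]}")
```

Part of the appeal of a crossover like this is that each student ends up serving, in effect, as their own control, so comparisons don't hinge on the two groups happening to be equivalent.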
The procedures used in the classes were set up carefully so that neither method (lecture or active learning) had an unfair advantage. The content was the same, the materials and objectives were the same, and the problem sets were the same. The only difference was that in the lecture condition the teacher, well, lectured, demonstrating how to solve the problems while students followed along on their own worksheets. In the active learning condition, students tackled the problems in small groups. The instructors circulated as one typically does during this kind of activity, and at the end they walked the class through the correct solutions.
The researchers gathered data from students on two separate dimensions: first, mastery of the content (the TOL, or test of learning), and second, subjective feelings about the lesson (the FOL, or feeling of learning).
The TOL, notably, was prepared independently of the instructors, as a safeguard
against their teaching too closely to the test.
Here’s what the researchers found.
Students in the active learning condition scored higher
on the TOL, and judging from the data distributions laid out in the article, it
wasn’t even close. And, you guessed it — they scored lower on
the FOL.
Those disparities in FOL scores are stunning, especially on two questions: “My instructor was effective at teaching” and “I wish all of my physics classes were taught this way.”
If you're an instructor who works hard to create great pedagogy and that doesn't make your heart sink, well, you're a lot tougher than I am.
As the teaching expert and inclusivity advocate Viji Sathy pointed out in this tweet, this pattern of
findings means that all those student opinion surveys, evaluations and the like
could well be disadvantaging exactly those individuals who are doing more to
promote student learning.
That fact in and of itself would justify circulating
this study like crazy. And while I’ve cautioned against using one study as a blunt instrument to push one’s own pedagogical agenda, this time, a
little bludgeoning might be in order.
Why? There are the pristine methods, analyses, and interpretations, of course, which all read like a master class in how to conduct scholarship of teaching and learning. There's also the way the findings neatly funnel into a conclusion that many of us have long suspected: that students, whether because they are novices in a field, uncomfortable with the messiness and effort of active learning exercises, or lacking insight into how learning works, are poor judges of effective teaching.
There is also the way the findings fit with themes from so many other lines of research, amplifying and deepening those themes without repeating them. As the authors note, their results follow from a long line of well-established cognitive principles, including desirable difficulty and the development of expertise.
They also remind me of some of my own research from long
ago, when my colleague Laurie Dickson and I were looking at the impacts of assigning practice quizzes in Introduction to
Psychology.
On an end-of-semester survey, many students said they
thought the quizzes helped raise their exam performance, an impression borne out
by our comparisons across sections that were randomly assigned to do or not do
these exercises as part of their graded work. But we were always puzzled by the
fact that not all of those same students said they would voluntarily do this
sort of quizzing in the future.
It’s strange to think that a student might see value in
a learning exercise but still be unwilling to do it on their own. But it makes
some sense in light of the idea that active learning can be uncomfortable, and
effortful, and seemingly not as worthwhile as sitting and watching an expert
present the material.
What I also like about this article is that it suggests
solutions, not as an afterthought but with some very clear recommendations for
how to improve teaching and learning in higher education.
One is to look at student impressions very cautiously, or not at all, when judging teaching. This recommendation arrives at a moment when reliance on student evaluations is reaching a system-blinking-red point. Disadvantaging those who use active learning is just one of many reasons why.
Note that I said reliance, and not over-reliance.
The longer I work in academia, the more convinced I am that attempting to bring
in “balance” or “context” as a way to appropriately de-emphasize student
evaluations just doesn’t work. Those numbers and comments are a bell you can’t unring when it comes to forming impressions of others’ work. Hard as it is, as a profession we need to commit to developing new metrics that reflect teaching quality, particularly ones that privilege evidence-based course design features.
The article also advocates for something that is dear to
my heart: raising student awareness about why we teach the way we do, and how they can
take the same principles and run with them as they take charge of their own
learning.
It's easy to overlook, but the authors include an intriguing final component documenting the positive effects of a short presentation about how active learning works and why FOL is a poor indicator of actual learning. Struggle can be good, and teaching students this fact can be transformative, an idea that also reminds me of the work on belongingness and normalizing struggle early in the college career.
Of course there are caveats to how we apply and extend
this work to our own teaching practices. I hope that the attention paid to it
doesn’t trigger a wave of teacher-knows-best, adversarial pedagogy the likes of
which we’ve seen aired in the classroom laptop wars over the last few years.
If that were to happen, it could set back the current
trend towards empathic,
truly student-focused pedagogy, and that would be
a tragedy. And a needless one at that, because as I see it, empathic teaching
is perfectly compatible with the idea that our students aren’t always the best
judges of their own learning processes — not because they are students, but
because they are novices, and also because they are human beings.
I'll be emailing this article to my colleagues, and I
suspect many of you will be as well. We’ll also be seeing — I hope — some
spirited discussion about what the findings do and don’t mean for college
teaching. This time, let’s see to it that action also follows.