Practicals Make Perfect

The physical environment provides continuous and usually unambiguous feedback to the learner who is trying to learn physical operations, but does not respond to the learning attempts for cognitive operations.

Engelmann, Siegfried; Carnine, Douglas. Theory of Instruction: Principles and Applications (Kindle Locations 1319-1320). NIFDI Press. Kindle Edition.

Into The Dustbin Of Pedagogy?

Helen Rogerson asks: “Should we bin [science] practicals?” and then answers emphatically: “No. We should get better at them.”

I wholeheartedly concur with her last statement, but must confess that I find it hard to articulate why I feel practical science is such a vital component of science education.

The research base in favour of practical science is not as clear cut as one would wish, as Helen points out in her blog.

New-kid-on-the-blog Adam Boxer has even written a series of blog posts with the provocative title “Teaching Practical Skills: Are We Wasting Our Time?”. He writes:

[T]his then raises the question of “what about the kids who are never going to see a pipette dropper again once they’ve left school?” I don’t have a great answer to that. Even though all knowledge is valuable, it comes with an opportunity cost. The time I spend inculcating knowledge of pipette droppers is time I am not spending consolidating knowledge of the conservation of matter or evolution or any other “Big Idea.” [ . . . ] But if you’ve thought about those things, and you and your department conclude that we do need to teach students how to use a balance or clamp stand or Bunsen burner, then there is no other way to do it – bring out the practical! Not because anyone told you to, but because it is the right thing for your students.

Broadly positive, yes. But am I alone in wishing for a firmer foundation on which to base the plaintive mewling of every single science department in the country, as they argue for a major (or growing) share of ever-shrinking resources?


The Wrong Rabbit Hole


I think a more substantive case can indeed be made, but it may depend on the recognition that we, as a community of science teachers and education professionals, have gone down the wrong rabbit hole.

By that, I mean that we have all drunk too deeply of the “formal investigation” well, especially at KS3 and earlier. All too often, the hands-on practical aspect plays second (or even third or fourth) fiddle to the abstract formalism of manipulating variables and to the vacuous “evaluation” of data sets too small for sound statistical treatment.


So, Which Is The Right Rabbit Hole?

The key to doing science practicals “better” is, I think, to see them as opportunities for students to get clear and unambiguous feedback about cognitive operations from the physical environment.

To build adequate communications, we design operations or routines that do what the physical operations do. The test of a routine’s adequacy is this: Can any observed outcome be totally explained in terms of the overt behaviours the learner produces? If the answer is “Yes,” the cognitive routine is designed so that adequate feedback is possible. To design the routine in this way, however, we must convert thinking into doing.

Engelmann, Siegfried; Carnine, Douglas. Theory of Instruction: Principles and Applications (Kindle Locations 1349-1352). NIFDI Press. Kindle Edition.


Angle of Incidence = Angle of Reflection: Take One

It’s a deceptively simple piece of science knowledge, isn’t it? Surely it’s more or less self-evident to most people…

How would you teach this? Many teachers (including me) would default to the following sequence as if on autopilot:

  1. Challenge students to identify the angle of incidence as the independent variable and the angle of reflection as the dependent variable.
  2. Explain what the “normal line” is and how all angles must be measured with reference to it.
  3. Get out the rayboxes and protractors. Students carry out the practical and record their results in a table.
  4. Students draw a graph of their results.
  5. All agree that the straight line graph produced provides definitive evidence that the angle of reflection always equals the angle of incidence, within the limits of experimental error.


I’m sure that practising science teachers will agree that Stage 5 is hopelessly optimistic at both KS3 and KS4 (and even at KS5, I’m sorry to say!). There will be groups who (a) cannot read a protractor; (b) have used the normal line as the reference for measuring one angle but the surface of the mirror as the reference for the other; and (c) have managed every possible variation of the above.

The point, however, is that this procedure has not allowed clear and unambiguous feedback on a cognitive operation (i = r) from the physical environment. In fact, in our attempt to be rigorous within the “formal investigation” paradigm, we have diluted the feedback from the physical environment. I think that some of our current practice dilutes real-world feedback down to homeopathic levels.

Sadly, I believe that some students will be more rather than less confused after carrying out this practical.


Angle of Incidence = Angle of Reflection: Take Two

How might Engelmann handle this?

He suggests placing a small mirror on the wall and drawing a chalk circle on the ground as shown:


Theory of Instruction (Kindle Location 8686)

Initially, the mirror is covered. The challenge is to figure out where to stand in order to see the reflection of an object.
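The prediction the learner must make (“where do I stand to see it?”) is pure i = r geometry: reflect the object through the plane of the mirror and draw a straight line from that image to your eye. A short sketch of the geometry (the coordinates, function names, and numbers are my own illustration, not Engelmann's):

```python
def reflection_point(obj, viewer):
    """Top-down 2-D view with the wall (and mirror) along the y-axis.
    obj and viewer are (distance_from_wall, position_along_wall) pairs.
    The law of reflection (i = r) is equivalent to reflecting obj
    through the wall to its mirror image at (-d, y) and drawing a
    straight line from that image to the viewer; this returns the
    along-wall coordinate where that line crosses the wall."""
    (d_obj, y_obj), (d_view, y_view) = obj, viewer
    t = d_obj / (d_obj + d_view)  # fraction of the way from image to viewer
    return y_obj + t * (y_view - y_obj)

def sees_reflection(obj, viewer, mirror_lo, mirror_hi):
    """The learner sees the object's reflection only if the i = r
    crossing point lands on the actual mirror."""
    return mirror_lo <= reflection_point(obj, viewer) <= mirror_hi

# Object 2 m from the wall; learner 2 m out and 1 m along the wall:
print(reflection_point((2.0, 0.0), (2.0, 1.0)))  # 0.5: midway, since i = r
```

Keeping the mirror covered until the learner has committed to a position is what turns the uncovering into verification.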


Note that the verification comes after the learner has carried out the steps. This point is important. The verification is a contingency, so that the verification functions in the same way that a successful outcome functions when the learner is engaged in a physical operation, such as throwing a ball at a target. Unless the routine places emphasis on the steps that lead to the verification, the routine will be weak. [ . . . ]

If the routine is designed so the learner must take certain steps and figure out the answer before receiving verification of the answer, the routine works like a physical operation. The outcome depends on the successful performance of certain steps.

Engelmann, Siegfried; Carnine, Douglas. Theory of Instruction: Principles and Applications (Kindle Locations 8699-8709). NIFDI Press. Kindle Edition.

Do I want to abandon all science investigations? Of course not: they have their place, especially for older students at GCSE and A-level.

But I would suggest that designing practical activities in such a way that more of them use the physical environment to provide clear and unambiguous feedback on cognitive ideas is a useful maxim for science teachers.

Of course, it is easier to say than to do. But it is something I intend to work on. I hope that some of my science teaching colleagues might be persuaded to do likewise.

A ten-million year program in which your planet Earth and its people formed the matrix of an organic computer. I gather that the mice did arrange for you humans to conduct some primitively staged experiments on them just to check how much you’d really learned, to give you the odd prod in the right direction, you know the sort of thing: suddenly running down the maze the wrong way; eating the wrong bit of cheese; or suddenly dropping dead of myxomatosis.

Douglas Adams, The Hitch-Hiker’s Guide To The Galaxy, Fit the Fourth



Filed under Direct Instruction, Education, Science, Siegfried Engelmann

9 responses to “Practicals Make Perfect”

  1. Great piece. Happy to be the “new kid on the block” 😉

    A question so I can check I understood you correctly. Let’s say you used some method other than Engelmann’s to provide explicit instruction on i = r: a ball, a laser, or some other kind of demo. Then you had the students work through some examples (and non-examples), developed the thinking that way, and reached a level approximating mastery of i = r. Then you allowed the students to perform a practical with the light boxes etc. with the aim of confirming what they already know. Would that fit your criteria? They are receiving clear and unambiguous feedback from the environment.

    Also, what about practicals where the results are less neat, like a V = IR experiment where you are holding R roughly constant with a rheostat but in reality your results fluctuate by ±10% or so?

    • Thanks for the comment, Adam — I’m only calling you ‘new kid’ BTW because I’m jealous of the quality of your blog 🙂

      The method you suggest for i = r seems perfectly sound to me and would lead to some good learning. However, Engelmann (as I understand him) suggests that the best learning happens when “thinking is translated into doing”: when the physical environment can be used to provide clear and unambiguous feedback on cognitive operations. In other words, students make a prediction based on their current knowledge and get near-instant feedback. (Your lead iodide example is another good one, I think.)

      I believe our current practice in science practicals often does not provide that clear, unambiguous and near-instant feedback. Rather, it shrouds the feedback in cognitive operations that many students find abstruse.

      Helen Rogerson made the point on Twitter that Engelmann’s i = r method is similar to the best science investigation practice in primary schools, as she understands it. And it’s true that as students get older and the level of work increases, it may become more difficult to use the physical environment to provide the desired “clear and unambiguous feedback”.

      But I think the point I made about science practicals not being “practical” enough in current practice is still valid.

      How far the project can go is unclear. As you point out, some worthwhile practical activities are inherently “messy” and will not fit the model. And I think that students SHOULD carry out full investigations during their time in school. But I do believe that making every practical into a full-blown investigation or even a mini-investigation is a mistake because it takes too much of the practical out of science practicals.
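      To make the ±10% point concrete: even with the effective resistance wandering between readings, a least-squares fit of V against I still pins down R surprisingly well. A rough simulation (the 100 Ω resistor, current range, and drift figure are all invented for illustration):

```python
import random

def noisy_vi_readings(true_R=100.0, drift=0.10, seed=3):
    """Simulate an Ohm's-law practical in which the effective resistance
    wanders by up to ±drift between readings (rheostat slip, heating, etc.)."""
    rng = random.Random(seed)
    currents = [0.02 * k for k in range(1, 11)]  # 0.02 A to 0.20 A
    volts = [i * true_R * (1 + rng.uniform(-drift, drift)) for i in currents]
    return currents, volts

def best_fit_R(currents, volts):
    """Least-squares slope of V against I through the origin."""
    return sum(i * v for i, v in zip(currents, volts)) / sum(i * i for i in currents)

currents, volts = noisy_vi_readings()
print(f"best-fit R: {best_fit_R(currents, volts):.1f} ohm")
```

The fit lands within ±10% of the true R by construction, even though no individual reading is trustworthy on its own; the question is whether the student can see that signal through the scatter.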

  2. Pingback: So, are we wasting our time? – A Chemical Orthodoxy

  3. Thanks for this interesting post. I think it would help if we science teachers made a clearer distinction between demonstrating and investigating. Too often ‘investigations’ are in fact demonstrations – constructed by the desire to show students what they are ‘supposed’ to see. There is a place for demonstrations, but investigations should allow students to investigate – receive empirical feedback from the environment. That’s also a key reason to do practical science – the motto of the Royal Society is “Nullius in verba”: take nobody’s word for it. Testing ideas is at the heart of what science is, and is perhaps (more than using a dropper pipette) something all students can apply in whatever walk of life they choose.

    • I think we agree in principle on the value of practical work and investigations in school science: a scientific theory lives or dies by “empirical feedback”, after all. However, I worry that too often we are asking students to run before they can walk: the focus is on the formal investigation process rather than on the actual empirical feedback of “This happens. That didn’t happen.” I think this applies to younger secondary students especially. I think a growing number of students lack the mechanical “smarts” and physical intuition of previous generations simply because they are more likely to play on a screen than in a sandpit. I also think there’s a case that students need to see what they are “supposed” to see in order to develop trust in the scientific method.
