Don’t drill but do practice


In education, wild misconceptions abound related to the concept of practice. A quick search of the web yields a host of people stating problems with practice.


The old “drill the skill” strategy of learning math facts was based solely on memory. Any strategy that is based solely on memory has a weak foundation. 

Source: Response: Ways To Teach Math Besides 'Drill The Skill’

What worries me, and should worry you, is what happens to children who are subjected to prepackaged curriculums. If I’d left JJ in a school that relied only on skill and drill worksheets to teach reading, I already saw what would happen – she wouldn’t be an enthusiastic reader, even a strong reader.

Source: The Case Against Skill and Drill Curriculum

"This [preparing for the tests] is all we did for the first half of the year," Marciniak said. "Our teachers focused on nothing else. And it's kind of hard sitting there as you're basically drilled and lectured on nothing but this."  This kind of "skill and drill" test preparation is increasingly widespread, especially in urban and rural schools where there are large numbers of students disadvantaged by poverty and where these students too often score poorly on tests. 

Source: Sustaining test score gains requires good teaching, not skill and drill



The above quotes express real frustration and concern for student outcomes. The objections to “skill and drill” clearly come from a place of caring. Yet each quote shows how its author got the idea of practice wrong.

Drill, Skill and Drill, Drill and Kill, Practice

In the previous quotes most people used the term “skill and drill.” But other derivatives, such as plain old “drill,” or its pejorative sister term, “drill and kill,” evoke notions of blind, back-breaking, soul-crushing activities that lead to poor academic outcomes and terrible emotional side effects.

It makes sense on the one hand. If we took an enthusiastic, joyful, curious kindergarten student and gave them worksheets for long hours, we'd have a situation resembling forced labor, not thoughtful teaching. Parents would not want their child's learning squashed by unrealistic work demands. Likewise, teachers do not want to ruin the youthful exuberance and spirit of inquiry.

Therein lies the problem. The terms Skill and Drill, Drill and Kill, Drill, etc. have come to represent very inefficient, aversive practice methods. And because those methods make life unpleasant for the student, no one should use them. The logic train, however, has a flaw:

  1. Drill and Kill harms students.
  2. Drill and Kill is practice.
  3. Eliminate practice so we no longer harm students.

The previous deduction falls under the category of an “improper generalization.” An improper generalization contains an inaccurate premise, rendering the conclusion false. Let’s review each statement.

Falsehood #1. Drill and Kill harms students.

Possibly true. What exactly people mean by the term varies. When someone paints a picture of pushing students to fatigue, harsh conditions, and forced compliance, sure, let’s avoid those situations at all costs.

Falsehood #2. Drill and Kill is practice.

Thoughtful, nurturing, and meaningful practice involves four main elements: (a) timed repetition of a behavior or skill; (b) a quantified, time-based goal; (c) performance feedback delivered after each practice trial; and (d) a sufficient amount of daily practice (Binder, 1996; Ericsson, 2006).

Drill and Kill fails the test of good practice because it often has no time limit, lacks goals, and offers strikingly little feedback (and may instead include criticism). Additionally, Drill and Kill asks students to practice beyond reason.

Falsehood #3. Eliminate practice so we no longer harm students.

Talk about throwing the baby out with the bath water! Yes, Drill and Kill has no place in a humane, compassionate, and positive educational environment. On the other hand, an educational environment that withholds or completely eliminates practice suppresses true mastery and fluency of content. Can you think of any human, anywhere, at any time, who has become fluent with a skill without practicing?


Research shows again and again the critical, absolutely essential need for practice in skills ranging from music and surgery to reading and problem solving. Let’s do away with Drill and Kill and champion effective practice in all classrooms!

How has effective practice helped your learners become fluent? How have you merged practice into a package curriculum? We'd love to hear your stories!


Binder, C. (1996). Behavioral fluency: Evolution of a new paradigm. The Behavior Analyst, 19, 163-197. 

Ericsson, K. A. (2006). The Cambridge Handbook of Expertise and Expert Performance. Cambridge, UK: Cambridge University Press.

The record ceiling


In 1981, an important paper appeared in The Behavior Analyst titled “Current measurement in applied behavior analysis.” The paper reviewed the practice of using discontinuous time-based measures to count and record behavior.

Interval recording represents a prime example of a discontinuous time-based method for behavioral observation. Take the example of “momentary time sampling,” or MTS. To use MTS, an observer looks to see whether the behavior is occurring at a specified moment at the end of each pre-selected interval of time (Cooper, Heron, & Heward, 2007). Thus the name: momentary time sample.

Below, let’s practice MTS. At the end of each 6-second interval, check whether the observation target occurs. The target to observe: a blue circle resting in the middle of the screen. A circle moving within the screen or leaving the screen does not count as the target observation. If at any point during the 1-second moment (i.e., at seconds 6, 12, 18, 24, 30, 36, 42, 48, and 54) you see the circle resting in the middle of the screen, give it a check mark for a count of one.

Feel free to download the PDF recording sheet (Recording form for MTS.pdf, 45 kb) and fill in the form.

Next, start the video clip. You will see a timer at the bottom of the clip. Every six seconds make a check if our friend the blue circle appears in the center of the screen.

Figure 1. A video of a blue circle appearing across time.

What count did you end up with? Did you notice that the blue circle appeared more often than momentary time sampling allowed you to count?

Some people choose interval recording methods like MTS to count behavior. But deciding to measure only a sample of the full range of behavior creates an artificial ceiling. In Precision Teaching, we call any practice that imposes an upper limit on what we can record a “record ceiling.”

In the previous exercise we can easily see the record ceiling. If you used the PDF form above, you can see the maximum number of times it was possible to count and record the target observation. The animated gif below shows we can record a maximum of only nine counts.

Figure 2. An animated gif showing the possible number of observations.

The term record ceiling thus defines itself functionally: when making a record (of an observation), we encounter a ceiling (a limitation). On the Standard Celeration Chart, the record ceiling tells chart readers that a ceiling constrained the measurement of a specific data point.

If you go back to Figure 1 and count the frequency of the blue circle appearing, what do you come up with? The MTS count and the frequency count portray a stark difference in recorded target observations: the MTS yielded a count of 2 appearances of the blue circle, compared to 12 for the full 59-second counting time.
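The undercounting in this exercise can be sketched in a few lines of Python. The visibility windows below are hypothetical stand-ins for the video (not the actual clip’s timings); the point is that checking only at nine 6-second marks misses appearances a continuous count would catch.

```python
# Momentary time sampling (MTS) vs. a continuous count -- a sketch using
# hypothetical visibility windows, NOT the actual timings from the video.

# (start, end) seconds during which the blue circle rests mid-screen
appearances = [(1, 2), (3, 4), (7, 8), (9, 10), (13, 14), (16, 17),
               (19, 20), (25, 26), (31, 32), (37, 38), (47, 49), (53, 55)]

# Continuous recording counts every appearance
continuous_count = len(appearances)

# MTS only checks at the end of each 6-second interval: the record ceiling
check_moments = list(range(6, 60, 6))  # 6, 12, ..., 54 -> at most 9 counts
mts_count = sum(
    1 for t in check_moments
    if any(start <= t <= end for start, end in appearances)
)

print(continuous_count)    # 12 appearances in total
print(len(check_moments))  # 9 -- the record ceiling
print(mts_count)           # 2 -- only appearances spanning a check moment
```

With these windows, only the appearances that happen to straddle a check moment register, so the sampled count sits far below the continuous count, and it can never exceed nine no matter what the circle does.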

On the Chart, we can now see the two frequencies and the position of the record ceiling. The record ceiling draws our attention to something different about the observation.


Figure 3. A cross section of the Standard Celeration Chart showing two different observations of the same target.

By displaying the record ceiling, chart readers have additional information guiding their analysis. The data can only go as high as the record ceiling.

The record ceiling can also come from places other than discontinuous time measures. Using percent correct, for example, imposes a record ceiling. Let’s say we give a spelling test with 10 items. Choosing percent correct imposes a ceiling of 100% (10 correct and zero incorrect).
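A quick sketch (with hypothetical numbers, not from the post) shows the contrast: percent correct tops out at 100% no matter how fluent the speller becomes, while a rate measure such as words per minute keeps registering improvement.

```python
# Percent correct imposes a record ceiling; a rate measure does not.

def percent_correct(correct, total_items):
    return 100 * correct / total_items

def words_per_minute(correct, minutes):
    return correct / minutes

# A 10-item spelling test: a perfect score hits the 100% ceiling
print(percent_correct(10, 10))    # 100.0 -- can never go higher

# The same 10 correct words, measured as a rate, keeps discriminating:
print(words_per_minute(10, 2.0))  # 5.0 words per minute
print(words_per_minute(10, 0.5))  # 20.0 words per minute -- no ceiling
```

Two students who both score 100% look identical on the percent measure, yet the rate measure still separates the fluent speller from the laborious one.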

Record ceilings, like the time bar, give the chart reader more information. And any extra details help the performer and educational team fully analyze the data and facilitate decision making. 


Behold, the yearly chart


At Chartlytics, we have your data display needs covered! Not only can you find a daily chart, but also weekly and monthly charts. With our next release, Chartlytics will also offer the yearly chart. And then we will have a celebration, a celeration celebration! The Chartlytics suite of charts lets you examine data occurring as often as daily all the way down to data occurring as infrequently as yearly.

An example of the utility of the yearly chart appears below. Every state must provide the United States Department of Education with an annual report of specific data related to the education of students with disabilities. The task of reporting special education data for Pennsylvania falls on the shoulders of PennData. PennData verifies and reports information about special education students who live in PA, including counts of students’ primary disabilities. As of 2014-15, the following disability categories appeared in the report: Autism, Deaf-Blindness, Hearing Impairment including Deafness, Intellectual Disability (MR), Multiple Disabilities, Orthopedic Impairment, Emotional Disturbance, Specific Learning Disability, Speech or Language Impairment, Traumatic Brain Injury, Visual Impairment including Blindness, and Other Health Impairment.

Some of the previous disability categories have received a great deal of attention, namely autism. Even driving down a road you may have encountered a billboard raising awareness as to the prevalence of autism.


Figure 1. A billboard showing prevalence for individuals with autism.

For people of a certain age, the number certainly seems incredible. I grew up a child of the 70s, and the prevalence estimate for autism then came to about 1 per 20,000 (1 student in 20,000 had a diagnosis of autism). Today one can find prevalence figures of 1 in 88 and even 1 in 68. From my childhood to today, the proportion of children found to have autism has changed dramatically!

Why the prevalence has grown raises debate. Discussing why it has changed would require another blogpost (or three). But the data undeniably show a large increase in students receiving the diagnosis of autism. Now back to PennData.

The chart below shows the yearly counts of three different categories of students in the state of Pennsylvania: the total number of students across all reported disability categories, students with intellectual disabilities (previously reported as Mental Retardation), and students with autism (a category that previously also included pervasive developmental disorder). As shown by the yearly chart, the data reflect the reality of changes for the disability category autism.


Figure 2. A yearly Standard Celeration Chart showing PennData information

The celeration value covering 1995 to 2011 comes to x2.5. A x2.5 celeration represents a 150% increase across time! The bounce value speaks to variability: a bounce of x1.3 depicts stability. Therefore, we have two values on the yearly chart; together, celeration and bounce illustrate a rapidly growing, stable acceleration of students diagnosed with autism.

In contrast to autism, the count of students with intellectual disabilities has decelerated by ÷1.15. A ÷1.15 tells us the measured quantity decelerated 1.15 times every five years, which comes to a 13% reduction. The bounce, interestingly, also comes to x1.3, a remarkably stable pattern across time.

The last category shows the total number of students with disabilities in Pennsylvania. The celeration value displays a growth rate of x1.15. The bounce again comes to x1.3 and suggests stability across time. High variability would suggest an influx of students, or a dramatic lessening, for example from students leaving the state. But we see little variability.
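The arithmetic behind these multiply/divide values can be sketched in a few lines. The conversion formulas are standard ratio arithmetic; the specific factors (x2.5, ÷1.15, x1.15) come from the chart above.

```python
# Convert celeration multipliers to percent change.

def accel_to_percent(factor):
    """Percent increase implied by an acceleration of x-factor."""
    return (factor - 1) * 100

def decel_to_percent(factor):
    """Percent decrease implied by a deceleration of /-factor."""
    return (1 - 1 / factor) * 100

print(accel_to_percent(2.5))             # 150.0 -> x2.5 is a 150% increase
print(round(decel_to_percent(1.15), 1))  # 13.0  -> /1.15 is about a 13% cut
print(round(accel_to_percent(1.15), 1))  # 15.0  -> x1.15 growth in totals
```

Note the asymmetry: a multiply factor of 1.15 means a 15% increase, but a divide factor of 1.15 means only a 13% decrease, because the reduction is computed against the larger starting value.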

Another beautiful feature of the yearly chart comes when we compare the three different data sets. Notice that each data set starts at a different place on the chart. But the yearly Standard Celeration Chart (SCC) allows a fair comparison of the magnitude of change across the three data sets because of the chart’s proportional (ratio) construction. Look at the linear graph below. Does it depict the same type of changes as the yearly SCC?



Figure 3. A linear graph showing PennData information

A curious relation emerges when we view the yearly SCC (Figure 2). Students with intellectual disabilities started off at a much higher total (approximately 28,000) than students with autism (about 1,000) in 1994. A deceleration of ÷1.15 (a 13% reduction across time) for students with intellectual disabilities meant that by 2011 almost 10,000 fewer students fell under that disability category. The celeration value of x2.5 for autism reflects a 150% increase for the category across time; by 2011, about 19,000 more students had the disability of autism.

From the data, we cannot conclude that the 10,000 fewer students with intellectual disabilities ended up among the 19,000 additional students with autism. The PennData represent descriptive counts, not causal data. Still, the chart allows us to see relations between two data sets and explore questions more rigorously. Why the deceleration in students with intellectual disabilities? Do the other disability categories show growth or decay across time? One might plausibly argue, for example, that students with intellectual disabilities now show up as students with multiple disabilities.

Many discoveries await you when you chart on the Standard Celeration Chart. And now that you have access to another chart, fire up those yearly data!

