We're Adults. We Surely Don't Need Supervision...

5/22/2017
 

The Values of BACB Supervision


Supervision of future behavior analysts is a critical task for both supervisor and supervisee. With many excellent behavior analysis training programs located around the world, students unquestionably learn the principles of behavior and the theory of the science. Applying those principles, measuring behavior, and watching contingencies change behavior are crucial to the development of future behavior analysts. Without the careful guidance of a supervisor, the supervisee may miss critical features of defining behavior, measuring behavior, and implementing intervention. The supervision process is similar to working with clients in the field: it requires careful and systematic approaches to ensure success. Supervision is more than passing a certification exam; it's the chance to apply the science with monitored support.

 

At Chartlytics, supervision services were constructed with four main goals:

1. Develop future BCBAs' skills in defining behavior.

Using precision terminology, supervisees will precisely define target behaviors for change using movement cycles and the pinpoint+.

2. Measure behavior to make the best decisions for the clients receiving services.

A target behavior can be graphed in multiple ways. A behavior analyst should be thoroughly aware of how to graph behavior and how to analyze the data. Using the Standard Celeration Chart (SCC), supervisees can begin to track their own behavior and then move on to their clients'. The added support of quantification and standards allows for a deep level of analysis and lets the supervisee make decisions with confidence.

3. Provide seminal literature in the field so supervisees become familiar with where the field started, where it is now, and where it could be going.

For example, supervisees apply the dimensions of applied behavior analysis to their own work (Baer, Wolf, & Risley, 1968, 1987).

4. Create a behavior analyst who has acquired the skills to provide successful services in his or her chosen place of employment.

Completing these goals, together with individualized development, allows the supervisee to finish supervision successfully. It also creates a professional who can address the challenges of working with varying topographies of behavior and help those who need behavior-analytic services. Behavior is a complex subject matter that requires the application of principles to demonstrate functional relations. Establishing goals that ensure the supervisee's success, while providing flexibility during supervision, is paramount to the development of those entering the field.

Chartlytics as an organization strives to let measurement guide decisions. The goal of our supervision program is to pass those values on to our supervisees while allowing them to treat behaviors that interest them.

If you're interested in our supervision services at Chartlytics, you can apply today!

APPLY FOR FREE


 


Meet the Author:

Sal Ruiz, BCBA, Doctoral Candidate, The Pennsylvania State University.
Sal Ruiz is a third-year Ph.D. candidate in special education at The Pennsylvania State University. His research interests include functional analysis, the Standard Celeration Chart, and challenging behavior. Prior to beginning his doctoral studies, Sal was a behavior specialist in a public school in northern New Jersey. He obtained his BCBA credential in August 2013 and currently supervises those pursuing certification. Visit Sal on LinkedIn.

 

So You Want to Get BACB Certified...

5/15/2017
 

           

Behavior analytic literature provides some excellent guidelines for implementing the principles of behavior analysis with clients. For example, the dimensions of applied behavior analysis highlight the importance of describing behavior in observable terms, demonstrating functional relations, providing replicable procedures, intervening to improve the lives of the organisms we work with, producing behavior change that is durable across settings, and ensuring the procedures work (Baer, Wolf, & Risley, 1968, 1987). Following these guidelines is critical to success, whether you work with typically developing people, those with a diagnosis, or non-human organisms. Doing this well in the early stages of your career requires structured support from behavior analysts with a history of successfully building behavioral repertoires.

(Figure: Features of the Standard Celeration Chart)

Meaningful supervision experiences utilize the same principles and guidelines, with a level of fidelity and attention to detail that parallels the services your clients receive. Throughout the Independent Field Experience at Chartlytics, we implement deliberate strategies to ensure supervisee success. We aim to shape up a repertoire of best practices and produce behavior analysts who perform well beyond minimum competencies.

Here’s how we do it:

  • First, we help our supervisees gain command of the literature through careful examination of seminal manuscripts in the field.
  • Second, we provide practice opportunities to define and measure behavior. We place a unique emphasis on defining behaviors precisely and mastering best measurement practices for various scenarios.
  • Third, we guide supervisees through single case research designs and graphical displays while modeling analyses and decision-making processes, ultimately building supervisees’ analytic skills with graphed outcome data.
  • Fourth, we identify and strengthen the behaviors needed to operate independently of the supervisor.

The process provides a high level of guidance through supervision and gradually fades based on the performance of the supervisee. Precision Teaching emphasizes data analysis to determine when change is appropriate; the same model is applied to our Independent Field Experience. The supervisor initially provides a high level of involvement, and the amount of guidance changes as needed, based on progress data. Further, we offer individual sessions and group sessions based on which option best fits the needs of our individual supervisees. Individual sessions focus on content such as client needs, data collection tools, and measurement tactics. Depending on client needs or supervisee interest, we may also assign literature to contribute to a well-rounded understanding of certain concepts or procedures. A group session may include a behavior-analytic discussion led by the supervisees, or this time can be used to connect future behavior analysts for the development of a professional network.


The field of behavior analysis has built a successful record of creating behavior change by individualizing interventions and measurement. The Chartlytics Independent Field Experience provides the same level of individualization and measurement. We seek to create behavior analysts who have a strong foundation in measurement. Through guided support and careful analysis of data, great change can occur. Supervisees will one day become BCBAs who impact the lives of many individuals, so we take supervision seriously.



Meet the Author:

Sal Ruiz, BCBA, Doctoral Candidate, The Pennsylvania State University.
Sal Ruiz is a third-year Ph.D. candidate in special education at The Pennsylvania State University. His research interests include functional analysis, the Standard Celeration Chart, and challenging behavior. Prior to beginning his doctoral studies, Sal was a behavior specialist in a public school in northern New Jersey. He obtained his BCBA credential in August 2013 and currently supervises those pursuing certification. Visit Sal on LinkedIn.

 


 

Without aims… you’re aimless!

7/28/2016
 

I’ve spent a lot of time lately with folks who are trying their hand at Precision Teaching for the first time.
Through this process, I’ve learned a lot about common “rookie” mistakes and have been reminded of those I made while getting started. It is fascinating to watch people transform into Precision Teachers as they embrace precise measurement and strive for effective teaching practices.
Here’s a rookie move that I found surprising, and surprisingly common: People go through all the work to pinpoint a behavior, create teaching materials, measure and chart behavior on the Standard Celeration Chart… but their charts have no aims!

The utility of aims

I don’t know who said it first, but in these moments I’m reminded of the saying, “without aims, you’re aimless.” It’s true! Without a goal in mind, how do you know how far you have to go? How do you determine when it’s time to move on to another target?

Here’s what I mean.

This is a chart with an aim star to represent the frequency aims for both acceleration and deceleration pinpoints.

The aim stars tell me that a performance of 120-80 correct words per minute and 0-1 incorrect words per minute is most likely to indicate fluent sight reading behavior. They also provide a visual anchor representing the time left to practice this pinpoint+ (in this case, I have a total of five weeks with Johnny before he goes back to school).

After a few days of practice, my celeration lines can be projected out. Both my acceleration and deceleration projections reach their respective aim stars. It looks like, if Johnny continues to practice, he will hit aim (and be fluent) right on time!
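If you chart by hand or in a spreadsheet, you can rough out the same projection numerically. The sketch below is a minimal, generic illustration, not Chartlytics code, and the daily counts, aim, and deadline are made up for Johnny's example: it fits a line to the logarithms of the daily correct counts, expresses the trend as a weekly celeration, and projects that line forward to the deadline to see whether it reaches the aim.

```python
import math

# Hypothetical daily performance: correct sight words per minute on practice days 0-4
days = [0, 1, 2, 3, 4]
correct_per_min = [30, 31, 31, 33, 34]

aim = 80            # lower edge of the 120-80 aim range (correct words/min)
deadline_day = 35   # five weeks of calendar days left with Johnny (assumed)

# Least-squares fit of log10(frequency) vs. day gives a multiplicative trend
n = len(days)
mean_x = sum(days) / n
mean_y = sum(math.log10(f) for f in correct_per_min) / n
slope = (sum((x - mean_x) * (math.log10(f) - mean_y)
             for x, f in zip(days, correct_per_min))
         / sum((x - mean_x) ** 2 for x in days))
intercept = mean_y - slope * mean_x

# Celeration is conventionally expressed as the change per calendar week
celeration = 10 ** (slope * 7)
projected_at_deadline = 10 ** (intercept + slope * deadline_day)

print(f"celeration: x{celeration:.2f} per week")
print(f"projected frequency on day {deadline_day}: "
      f"{projected_at_deadline:.0f} correct/min "
      f"({'reaches' if projected_at_deadline >= aim else 'misses'} the {aim}/min aim)")
```

With these made-up numbers the projection lands around 89 correct words per minute on day 35, so the learner would be on track for the 80/min edge of the aim.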

Here’s the exact same chart without aim stars.

Looking at this chart, I can see that correct words are increasing and incorrect words are decreasing. I can look at celeration values to determine by how much these performances are changing over time, but I cannot determine how long it might take to achieve an optimal frequency. For this reason, I can't make any recommendations about whether to stay the course or make a change to ensure adequate progress in the time I have with Johnny.

What’s in an aim?

Now that we know a bit more about the utility of aims, here are a few more things you should know about them:

  • Aim = “Performance Standard.” These terms are interchangeable in Precision Teaching.
  • An aim is typically a range of frequencies within which fluent behavior is observed.
  • The higher frequency often comes first (e.g., 100-80) to encourage teachers and students to shoot for the higher frequencies.
  • Reaching these aims tends to be predictive of the outcomes associated with fluency (Maintenance, Endurance, Stability, Application, and Generativity, or MESAG).
  • Many of the aims used in Precision Teaching have been empirically validated through years of research and classroom practice.

Finding and creating aims

I think a lot of people know these things already, and yet...aimless charts persist! Why is this? Well, it turns out, aims are kind of tricky. People are unsure whether or not they exist for certain pinpoints. If they do exist and can be found, it's not always clear where they come from or whether they are appropriate for certain learners. Oftentimes, practitioners and organizations create new pinpoints for their learners but have no formal process for creating associated aims.

Let’s see if this quick guide helps…

Step 1: See if the aims are already out there
The Precision Teaching Book has a long list of Performance Standards from Precision Teaching literature in the Appendix. Dr. Kubina shared it with me to share with you -- check it out here.

If you buy materials created by Precision Teachers (e.g., Morningside Press, Haughton Learning Center, The Maloney Method, Essential for Living, to name a few), aims are often included with the practice materials. You can assume all of these aims have been tested with many learners. If you're ever unsure, contact the author.

No luck? If you can’t find an aim for your pinpoint+, try:

Step 2: Get a sample
The best sample would include a range of performers who are competent in the skill area related to your pinpoint.

It’s good to start with professional adults for a few reasons:

  • Unless you work alone, there should be plenty of these in your workplace.
  • For many skills, it is easy enough to assume most adults are fluent in the behavior of interest.
  • Sampling a bunch of adults is quick and easy. You usually don’t have to spend much time explaining the task or helping them time and count their behaviors.
  • It can also be a nice way to interrupt a work day with 1-2 minutes of aim-setting timings. Have fun with it!


Another option is to look at performances from a sample of peers or near-peers to the performer for whom the pinpoint was selected. This can help you be sure your aims are appropriate for your learners. But be careful! Look for peers who are assumed to be masterful in the behavior of interest. We’re not looking for norms or averages because performing at the 50th, or even 80th percentile for one’s grade is nowhere near synonymous with fluency.
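If you want to turn a sample like that into a candidate aim range, here is one simple recipe, sketched in Python. Everything about it is illustrative rather than an established standard: it assumes you already trust that every performer in the sample is fluent, and the percentile cutoffs used to bracket the middle of the sample are an arbitrary choice. It does follow the conventions above, though: use competent performers rather than norms, and report the range with the higher frequency first.

```python
# One-minute counts (responses per minute) from adults assumed fluent
# on the pinpoint of interest -- made-up numbers for illustration.
fluent_sample = [96, 104, 88, 112, 92, 100, 85, 108, 95, 90]

def candidate_aim_range(counts, low_pct=0.25, high_pct=0.75):
    """Return (high, low) frequencies bracketing the middle of a fluent sample.

    Higher number first, per Precision Teaching convention (e.g., 100-80).
    The 25th/75th percentile cutoffs are an illustrative choice only.
    """
    ordered = sorted(counts)
    n = len(ordered)
    low = ordered[int(low_pct * (n - 1))]
    high = ordered[int(high_pct * (n - 1))]
    return high, low

high, low = candidate_aim_range(fluent_sample)
print(f"candidate aim: {high}-{low} per minute")
```

Treat the output as a starting point and adjust it against the hints below (timing length, learning channel, and ceilings).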

Helpful Hints

Here are a few additional considerations for setting aims.

Look out for fatigue (time matters!)
An important part of an aim is the terminal timing length (the optimal counting time at which a performance can be considered masterful). If the folks you are sampling start slowing down in the middle of the timing, this may be a pinpoint you want to practice at shorter timing lengths.

Consider the learning channel
Be sure to set an aim that is appropriate for the learning channel. For example, the aims I’ve used for math facts vary drastically depending on the learning channel:
    See-Say answers addition fact: 100-80 answers per minute
    See-Write answers addition fact: 80-60 answers per minute
    Hear-Say answers addition fact: 40-30 answers per minute

Watch out for potential ceilings
Ceilings are especially important to consider when working with young kids or those with physical limitations. Aims for targets that require writing, like math facts or writing words, should never be higher than a learner’s frequency of Free-Writes letters or numbers. Similarly, it may not be appropriate to set an aim for See-Say targets that is higher than a learner’s conversational speaking rate.

Remember what you are counting
Look back at your pinpoint+ for reminders about whether you are counting digits or answers, letters or words. In my experience, whenever an aim feels “not quite right,” it is usually because my aim and count don’t align.

Getting aims on a chart

Now that you’ve established an aim, get that aim on your chart!

There are two methods for displaying your aims on a Standard Celeration Chart. 

If you have a hard “deadline” for achieving fluent performance, the aim star is your best bet. On Chartlytics, you can enter the calendar days to aim when creating or assigning your pinpoint, and the aim star will be placed on that date on your chart.

If you’d prefer not to pick a date for reaching aim, you can simply enter your aim range, and the yellow band will appear across your entire chart.
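If you ever want to mock up the two displays outside of Chartlytics, any charting library will do. The matplotlib sketch below is only an approximation of the conventions, not a true Standard Celeration Chart: a star marker at the aim frequency on an assumed deadline day for the aim-star method, and a shaded band spanning an assumed 80-120 aim range for the band method, both on a multiply (logarithmic) frequency axis.

```python
import matplotlib.pyplot as plt

# Hypothetical charted data: correct responses per minute by calendar day
days = [1, 3, 5, 8, 10, 12, 15]
correct = [22, 25, 27, 33, 36, 40, 47]

fig, ax = plt.subplots()
ax.semilogy(days, correct, "ko", label="corrects per minute")  # multiply-scaled y-axis

# Method 1: aim star -- aim frequency placed on an assumed "deadline" day (day 35)
ax.plot(35, 80, marker="*", markersize=15, color="goldenrod", label="aim star (80 by day 35)")

# Method 2: aim band -- shaded range across the whole chart (assumed 80-120 aim range)
ax.axhspan(80, 120, color="yellow", alpha=0.3, label="aim band (120-80)")

ax.set_xlabel("Successive calendar days")
ax.set_ylabel("Count per minute")
ax.legend()
plt.show()
```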

Follow the data

As you work on the skill with your learner, keep an eye on celeration as well as the distance between current performance and the aim bar. This will help guide your precision decision making (should you continue, change, or complete).

Check for the indicators of fluency: Maintenance, Endurance, Stability, Application, and Generativity (MESAG). If you get all those before reaching aim, you may be able to move on. If you get to aim but aren’t getting these outcomes, consider modifying your aims.

Staying true to your aims

At some point in the process, you will doubt the aims you’ve set. You’ll think, “am I expecting too much?” You will consider changing your aims, and, as a result, lowering your expectations. Don’t!

"The 'resource' child has been too frequently branded with the reputation of 'slow learner'. Teacher expectations very often follow suit. Fortunately, evidence contrary to that kind of thinking and feeling is being produced by those who 'care enough to chart.'" (Bower & Meier, 1981, p. 27)

If you find yourself concerned about your aims being too high and your learners being too low, consult the literature on fluency and precision teaching. This has helped cure my second-guessing tendencies in the past. 

Learning about aims is just the tip of the Precision Teaching iceberg. If you are ready to dig deep into Precision Teaching then join us for our interactive live webinar.  

 

About the Author

Picture
Amy L. Evans, M.Ed., BCBA
Amy Evans is a Board Certified Behavior Analyst and a licensed special education teacher. Over the past nine years, Amy has worked in private learning centers, public school classrooms, and homeschool settings, combining the principles and tenets of Behavior Analysis, Precision Teaching, and Direct Instruction to solve educational and behavioral challenges. Amy runs her own tutoring business in Denver, Colorado, and serves as Assistant Vice President of Finance for the Standard Celeration Society. She currently works with Chartlytics to provide instruction and ongoing support to behavior analysts and educators who are new to Precision Teaching.
Meet Amy on LinkedIn: Amy Evans

References:

Beck, R., & Clement, R. (1991). The Great Falls Precision Teaching Project: A historical examination. Journal of Precision Teaching, 8(2), 8-12.

Binder, C. (1996). Behavioral fluency: Evolution of a new paradigm. The Behavior Analyst, 19, 163-197.

Bower, R. & Meier, K. (1981). Will the real slow learner please stand up? Journal of Precision Teaching, 11(2), 25-27.

Freeman, G., & Haughton, E. (1993a). Building reading fluency across the curriculum. Journal of Precision Teaching, 10, 29-30.

Freeman, G., & Haughton, E. (1993b). Handwriting fluency. Journal of Precision Teaching, 10, 31-32.

Haughton, E. C. (1972). Aims: Growing and sharing. In J. B. Jordan & L. S. Robbins (Eds.), Let's try doing something else kind of thing (pp. 20-39). Arlington, VA: Council for Exceptional Children.

Haughton, E. C. (1982). Considering standards. Journal of Precision Teaching, 3, 75-77.

Johnson, K. R., & Layng, T. V. J. (1996). On terms and procedures: Fluency. The Behavior Analyst, 19(2), 281-288.

Johnson, K. J., & Street, E. M. (2012). From the laboratory to the field and back again: Morningside Academy's 32 years of improving students' academic performance. The Behavior Analyst Today, 13(1), 20-40.

Koorland, M.A., Keel, M.C., & Ueberhorst, P. (1990). Setting Aims for Precision Learning. Teaching Exceptional Children, 22(3), 64-66.

Kubina, R. M. & Morrison, R.S. (2000). Fluency in Education. Behavior and Social Issues, 10, 83-99.

Kubina, R. M., & Yurich, K. K. L. (2012). The Precision Teaching Book. Lemont, PA: Greatness Achieved.

Lindsley, O. R. (1990). Precision teaching: By teachers for children. Teaching Exceptional Children, 22(3), 10-15.

Lindsley, O. R. (1995). Ten products of fluency. Journal of Precision Teaching and Celeration, 13, 2-11.

Mercer, C. D., Mercer, A. R., & Evans, S. (1982). The use of frequency in establishing instructional aims. Journal of Precision Teaching, 3(3), 57-63.

The 3 Reasons You Shouldn’t be in Special Education

7/26/2016
 

Statistically, you're crazy!
There can really be only one reason you're working in the special education field . . .

You absolutely love it.

You love working with your students, you love seeing them grow, and you love feeling like you made an impact on someone’s life.

You care about them more than anyone else.

You do everything you can to create better education plans, better interventions, and better results.

What are some of the things you do on a daily basis to make this kind of impact?

1. You give individual attention

You know that "the learner knows best." They deserve your attention and you give it to them, despite frustrating IEPs and clunky computer systems.


2. You won’t settle for less than the best.

There's an easy way, for sure. But you make sure, every day, that you've recorded their progress, graphed data on a chart, or written IEPs. It could make a HUGE difference for that kid!


3. You overcome daily obstacles

Every day it's something else. The internet goes down. A student acts out. A budget gets cut.

But you show up, every day, with the same persevering attitude. You are a high-pressure, steamrolling freight train who takes responsibility for your learners no matter what the circumstances. There's no such thing as "can't!"

So why hasn’t anyone recognized this?

And I don't mean "why hasn't anyone given you a pat on the back?"

I don’t mean a “great job today, Jill."

I mean, why hasn’t anyone said, “you’re a superhero! How can I help make your job easier?"

"How can I help you be even better?”

Let’s be honest:

There’s a gap in your life.

And it can’t be filled with fluffy comments, or fleeting pep-talks.

Too many teachers are being asked to submit mediocre IEPs, read confusing progress charts, and make decisions about interventions that are unclear, unproven, and downright backwards!

Think about it! You’re being asked to shape the behavior of special needs students AND get them to their academic targets!


If we want more successful teachers, we’re going to need:

1. An easier way to recognize and record behavior

We’re using subjective and vague descriptions for our IEPs and our data sheets. This is what goes right into our progress charts! How can we be sure we’re making the progress we say we are, if we’re not all on the same page about what “progress” actually means? It makes no sense to do all of this work and care so much if we can’t even make a confident decision at the end of it!
2. A faster and error-free way to track student progress

We're typing numbers into spreadsheets, a process prone to human error after a hard day of work. And we're making graphs that take time to create, explain, and share with others. (Anyone else tired of fussing with pivot tables in Excel?)

How can this process be so error-prone when the outcome is so important?


3. A clear and consistent way to communicate student progress to parents, learners, and supervisors.

Finally, when we get our graphs and reports, we each say something different about them.


No parent or supervisor has ever looked at a progress chart and immediately understood what it meant.

It takes time to explain, and even then, many people aren’t sure they’re seeing what they think they’re seeing.

There are plenty of data collection programs out there, but none of them give us the one thing we need: the REASON we started collecting data in the first place!

None of them allow us to confidently know where our students are heading, how long it will take them to get there, and how to explain their progress to supervisors and parents.

None of them have given us the power to make decisions in the moment.

Now, THAT could change someone’s life.

And isn’t that why we all signed up for this in the first place?


So how do we empower special education teachers to change students’ lives?


Scientifically proven to raise test scores and inform your interventions

  • Supports self-directed learning
  • Maps to any curriculum, learning style, or mastery criteria
  • Performance History shows a reliable record of student data indicating current performance levels and trajectories


Make more confident interventions and write better IEPs.

  • Guides you to intervention decisions to meet short- and long-term IEP goals
  • Make intervention decisions like a strategic scientist of instruction.
  • Teach 3-5 times more objectives per student.


Built for the real world.

  • Email simple progress charts to parents and supervisors to quickly show snapshots of progress.
  • Automatic data backups mean you won’t lose important progress when you lose wifi.
  • Easy data entry means less time entering student data.


Make graphs less scary.

  • "Single-click" graphs means no fussing with pivot tables in Excel.
  • Simple trend lines accurately and quickly forecast student performance for informed IEP goal-setting.
  • Attach text, images, or video to charts to add context to behavior.

So what does this actually mean for students?

Let’s get under the hood for a second.

Special Education is all about fostering independence. It’s about enfranchising students so they have the skills they need to confidently move forward in their lives.

So what approach do we take to make impact? How do we identify which students are at risk for falling behind? How do we effect lasting change across so many unique individuals?

Let’s start with a model you’ve probably already seen, the Response to Intervention (RTI) approach.

RTI is designed to help decision makers in both general and special education base their decisions on student performance data. For RTI to be effective, a few key components must be in place:

  • Scientifically proven classroom instruction
  • High quality, lightning fast student performance monitoring
  • Crystal clear instruction “tiers” to quickly differentiate unique students
  • Lots of parent involvement


Under the Individuals with Disabilities Education Improvement Act (IDEA, 2004), students fall into three tiers of instruction:

Tier 1 - 80-90% of students

Most students fall into Tier 1 and should receive high-quality, scientifically validated instruction from qualified personnel. This removes the possibility that a learner's poor performance is due to poor instruction.

Here's where fast, effective assessments and decision-making make a huge difference. If a student is having difficulty, we want to identify what's going on and what to do next as quickly as possible. If they have a learning disability, we don't want to hold them up with poor assessments; we just want to get them the instruction they need to move forward.

Tier 2 - 5-10% of students

This is the "hot zone." Students here have been identified as struggling learners, and schools are deciding whether to push them back into Tier 1 or move them to Tier 3.

These are the students who are at risk for falling behind and will respond best to intensive instruction and practice.

This requires a school to precisely identify a student's challenge, accurately monitor performance, forecast performance to predict achievement across the semester, and make the best decision.

This also means schools need to communicate across their staff and with parents to make sure all stakeholders are up to date and on board with the next steps.

Tier 3 - 1-5% of students

This is where individualized education plans are standard issue.

But in order for IEPs to be successful, they must break down target behaviors and academic achievement goals into small pieces.

Small enough that special education teachers can identify them instantly in a chaotic classroom, monitor them across the semester, make the right intervention decisions for each student, and communicate results to parents and supervisors.

This is critical, because these evaluations can affect a student’s eligibility for special education services under IDEA 2004. These evaluations also inform parents, who can request these services outside of the school.


This is precisely how we built the IMC measurement framework in Chartlytics.

Identify | Monitor | Communicate
Identify behavior. Monitor progress. Communicate results.

We saw too many “at risk” kids in Tier 2, whose difficulties weren’t being identified, whose progress wasn’t being monitored, and whose outcomes weren’t being communicated accurately to parents and supervisors.

This is heartbreaking, because we all know what it’s like to have a 5th grader who is reading at a 2nd grade level but who is expected to test against the common core standards set for 5th grade achievement.

That kid is going to be thinking “I must be stupid,” when in reality they just need more practice.

Let’s look at Suzie.

Suzie was a Tier 2 student until we applied the IMC measurement framework with Chartlytics.

She was reading words at 10 correct and 11 incorrect per minute. Obviously, at this rate, there is virtually no comprehension.


Within 30 sessions (405 minutes of instruction), she was reading brand new passages at 69 correct and 1 incorrect per minute.

This kind of growth is way above typical learning expectations.

Growth goals for readers in the 10th-25th percentile are between 0.6 and 1 word per week (Hasbrouck & Tindal, 2006). But with Chartlytics, we're seeing 1.2 to 5.1 words per week.
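For the curious, here is the back-of-the-envelope arithmetic. The calendar span is an assumption (the case study reports 30 sessions and 405 minutes of instruction, but not how many weeks they covered), so treat the result as illustrative:

```python
start_wcpm = 10   # correct words per minute at baseline
end_wcpm = 69     # correct words per minute after 30 sessions
weeks = 12        # assumed calendar span of the 30 sessions (not stated in the post)

gain_per_week = (end_wcpm - start_wcpm) / weeks
print(f"{gain_per_week:.1f} correct words per minute gained per week")
# ~4.9 per week under this assumption, versus the 0.6-1.0 per week
# growth goal reported for 10th-25th percentile readers.
```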

But how can that be?

This kind of growth is known as generative learning, and it's the result of a method called Frequency Building to a Performance Criterion (FBPC).

Basically, by identifying the core component skills of reading and practicing them with focused support from an instructor, Suzie was able to generate brand new skills.

In this case study, it looks like this:

Generative Learning


We’ve also found that this kind of learning is remarkably rewarding for students.

We’re now watching Tier 3 students reaching out to their teachers to see if their progress line has gone up this week.

Suddenly, the snacks and stickers fall to the wayside.

Learning is the reward.

Chartlytics is providing special education teachers with software that enables them to:
 

  • Precisely identify target behaviors and IEP goals
  • Quantitatively monitor progress
  • Confidently make intervention decisions
  • Clearly communicate IEP progress to supervisors and parents.


After 6 months,
you’ll be laughing at how you used to write IEPs . . .

You don’t need to change how you teach.
You don’t need extra time to enter data.
You don’t need to be a whiz with technology.

Schedule a free Chartlytics tour with a Celeration Ninja, today!

Functional Analysis Celeration Charting: Visual Analysis + Quantification

7/5/2016
 

Guest blog post by: Sal Ruiz, BCBA, Doctoral Candidate, The Pennsylvania State University.

Functional Analysis

Functional Analysis (FA) gives us an assessment to determine the function of challenging behavior. Without the ability to determine function, behavior analysts would have a difficult time creating and sustaining behavior change. Procedurally, FA includes the use of motivating operations, the elimination of confounding variables in the environment, and the use of visual analysis as a decision-making tactic to determine function (Neef & Peterson, 2007). FAs have seen widespread use since 1982 (see Hanley et al., 2003 for a comprehensive review). Since that time, variations have been developed that modify the procedures and the measurement of target behavior. For example, the use of Trial-Based Functional Analysis (TBFA) has become more prevalent in the research literature (see Rispoli et al., 2014 for a comprehensive review). TBFA addresses many potential challenges that may prevent professionals from using FA to determine function. Visually, TBFA typically uses bar graphs to represent data.

Visual Analysis

Visual analysis has long been a hallmark of single-subject work (Kazdin, 1982). The use of trend, level, and stability allows the user to determine whether a functional relation has occurred. FA relies on level as the primary decision-making tactic. By examining whether one condition occurs at a higher level than the other conditions, behavior analysts can develop interventions that create long-lasting change and are matched to function. However, behavior may not always be the product of one function. For example, an FA can yield multiply maintained or undifferentiated results. When behavior is multiply maintained, how do we know how much more one function is responsible for the occurrence of behavior than another? And if visual analysis produces undifferentiated results, what happens next?

(Figure 1: Example of a multiply maintained FA result, from Lee, 2009)

(Figure 2: Example of an undifferentiated FA, Roane et al., 2013)

 

Undifferentiated Results

Undifferentiated results pose a different set of problems. When the results of the assessment do not provide guidance on how to proceed, what should a practicing behavior analyst do? One option is a pairwise comparison, which involves conducting additional sessions that compare one function against the control condition. Another option is to manipulate the procedures. Neither guarantees the ability to detect function. Undifferentiated results can occur for several reasons; regardless, something needs to happen next.

(Figure 3: Sample of a sequential view FACC)

 

The FACC

The Functional Analysis Celeration Chart (FACC) is a standardized graphical display that can show data in two ways. The first is a sequential display of the data (see Figure 3). The sequential view allows carry-over effects to be detected through visual analysis. The second is a condition-grouped display (see Figure 4). The condition-grouped display makes it easy to visualize level and bounce; however, carry-over effects will not be detected. Each view offers a different depiction of the data while maintaining the ability to quantify the data.

(Figure 4: Sample of a Condition-Grouped FACC)

 

Quantification of FA data

The FACC may help when FA data show multiply maintained or undifferentiated results. The FACC allows quantification of the data to assist in decision making. It relies on level as the primary decision-making tactic and can compare each condition against the control as well as against the other test conditions. Quantification adds a supplementary check to visual analysis, regardless of the results of the assessment. For example, when the quantified levels match the visual analysis, we can be more confident about which function is maintaining the behavior. With multiply maintained behavior, quantification can help distinguish which condition evokes the behavior more frequently and by how much. With undifferentiated results, quantification can still describe the behavior and help with decision making.

The use of quantification and level can help determine function in situations where visual analysis alone does not work well. Further, the use of bounce can act as another check to determine function. On the SCC, bounce reflects the degree of control over a behavior: if a behavior shows tight bounce, we can assume the behavior is well controlled (Kubina & Yurich, 2012). Applying that principle to the FACC can help rule out a possible maintaining function. If the bounce is highly variable, we can't say that the behavior is contacting the relevant contingencies consistently; the organism is seeking reinforcement and has not learned a reliable way to obtain it.
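As a rough illustration of what that quantification can look like (a generic sketch, not the FACC software), the snippet below computes two SCC-style summaries for each FA condition: level as the geometric mean of the session frequencies, and total bounce as the ratio of the highest to the lowest frequency. Each test condition's level is then compared to the control condition's level as a ratio, giving a number to set beside the visual analysis. The session frequencies are invented.

```python
import math

# Hypothetical problem-behavior frequencies (responses per minute) by FA condition
sessions = {
    "control":   [0.2, 0.3, 0.2, 0.3],
    "attention": [1.6, 2.0, 1.8, 2.2],
    "escape":    [0.4, 0.3, 0.5, 0.4],
    "tangible":  [0.3, 0.2, 0.4, 0.3],
}

def level(freqs):
    """Geometric mean of the frequencies, used here as the measure of level."""
    return 10 ** (sum(math.log10(f) for f in freqs) / len(freqs))

def total_bounce(freqs):
    """Ratio of highest to lowest frequency; tighter bounce suggests more control."""
    return max(freqs) / min(freqs)

control_level = level(sessions["control"])
for condition, freqs in sessions.items():
    ratio = level(freqs) / control_level
    print(f"{condition:9s} level={level(freqs):.2f}/min  "
          f"x{ratio:.1f} vs control  bounce=x{total_bounce(freqs):.1f}")
```

In this invented data set the attention condition sits several times above the control level while the other conditions hover near it, which is the kind of separation the quantification is meant to make explicit.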

Quantification is a powerful tool that can provide assistance when visual analysis is not enough. The ability to quantify can save time, increase confidence in decisions, and minimize exposure to reinforcing contingencies. The FACC can pair with variations of FA. Pairing a proven assessment with the power of quantification has the potential to help our learners decrease their challenging behaviors and acquire replacement behaviors that allow them to access the same reinforcement.

About the Author

Sal Ruiz, BCBA, Doctoral Candidate, The Pennsylvania State University.
Sal Ruiz is a third-year Ph.D. candidate in special education at The Pennsylvania State University. His research interests include functional analysis, the Standard Celeration Chart, and challenging behavior. Prior to beginning his doctoral studies, Sal was a behavior specialist in a public school in northern New Jersey. He obtained his BCBA credential in August 2013 and currently supervises those pursuing certification. Visit Sal on LinkedIn.

References

The Hardest Part About Analyzing Your Data, Is Now Gone...

6/7/2016
 

Let's start with a look at Visual Analysis.

Cooper, Heron & Heward (2007, p. 149) say if you do visual analysis, you must look at three things.

  • Trends of data
  • The level of data
  • The extent and type of variability in the data

And here is a very nice quote from Lane & Gast (2014, p. 445):

"You need to look at variability or stability, you need to look at level, and you need to look at trend."
 
Do I have to convince anybody that if you are doing visual analysis, these things are important?
 
You probably agree with me. This is just how we do our business.

Trend
Take a look at the first graph below. We have an ascending trend, then we have another ascending trend. How many would be convinced that the intervention was good? That the progress happened as a result of the intervention itself? Would you be suspicious?


You should be! The problem is the behavior is already going up. So if something's going up and you say, "I did this thing and now it's going up," who are you convincing? This is done completely with visual analysis. You don't need statistics for this. You can look at the data and see. This is our science.

Now look at the second graph. If it's flat and then it goes up, that's convincing. And the third graph? If we have something that's going down and our intervention makes it go up, we can convince people of our effectiveness. This is part of trend analysis.

Level
Let's just say we have the average in baseline, which is a 2. Then you do something, now the average is a 10. Do we have an effect? Yes! You went from 2 to 10.
 
You're doing this through visual analysis.

Variability
Imagine that you have data points that are bouncing around and those two lines below are showing you what's called the variability envelope.


On the left, although it went up, the variability envelope is the same. Has variability changed? No. On the right, it has.
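Each of those three checks can also be summarized with a simple number alongside the visual inspection. The sketch below is a generic illustration with made-up data, not a prescribed method: it reports trend as a least-squares slope, level as the median, and variability as the width of a min-to-max envelope for a baseline and an intervention phase.

```python
import statistics

def summarize(phase):
    """Return (trend, level, variability) for one phase of time-series data."""
    xs = list(range(len(phase)))
    slope, _ = statistics.linear_regression(xs, phase)  # trend per session; Python 3.10+
    return slope, statistics.median(phase), max(phase) - min(phase)

baseline = [2, 3, 2, 1, 2, 3]
intervention = [8, 10, 9, 11, 10, 12]

for name, phase in [("baseline", baseline), ("intervention", intervention)]:
    trend, lvl, var = summarize(phase)
    print(f"{name:12s} trend={trend:+.2f}/session  level={lvl}  envelope width={var}")
```

Numbers like these never replace looking at the graph; they simply make the comparison between phases explicit.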
 
So here's my question to you:

Where does visual analysis occur?

Laboratory settings. Maybe you do work in labs and do visual analysis there. Maybe you are doing applied work where, again, visual analysis is frequent. You did a thesis, you did a dissertation? You probably did visual analysis.

Now, here is a critical question:

What does visual analysis rely upon?

Graphs.

Think about this for one minute. All that stuff about trend, level, and variability: if you have a compromised visual graphic, how good is your analysis? It will likely also be negatively affected. I can't tell you how important it is that we have a very good graph when we're making decisions. All of those things that I just shared with you, how we move our science forward, how we make applied decisions, how we affect lives: everything is reliant upon a graph.
 
So would you believe no one has ever done a study, not once, ever, to see how well we actually make graphs? No one's ever done it. I don't know why. But I decided that's something I needed to do. More on that later...

If you're ahead of the curve and want to get started, Chartlytics enables you to create the most effective graphs for monitoring and changing behavior. Find out how it works here, or sign up for a free trial.  If you’re ready to really dig in and move at your own pace, head over to Precision Teaching University.

Also, tell us about your graphing pet peeves on Facebook and Twitter.

You're Doing Your Client a Disservice Unless You Learn These Principles of Graphing.

6/6/2016
 

Get the video here at Precision Teaching University 

Speech Transcript:

Emcee: Hello, everybody, welcome back. I'd like to introduce our next speaker, Rick [inaudible 00:00:16].

Rick: Yeah. Rick, please.

Emcee: Let's kind of [inaudible 00:00:23] settle down. Rick is a professor of special education at [inaudible 00:00:29] New York City. He teaches courses on methods for teaching reading, informal assessment, [inaudible 00:00:36] single case design. Rick conducts [inaudible from 00:00:40 to 00:00:49] special education journals. He was the past editor of the Journal of Precision Teaching and Celeration. Let's welcome Rick.

Rick: Thank you. As you can tell by my playful slide, what I'm about to share with you today is received in one of two ways. One way, it becomes uncomfortable for some people, and the other way, it's very welcoming to some people. This research that I did gets at the heart of something all of us do every single day when it comes to understanding if we have an effect. And that's called visual analysis.

I submitted to major behavior analytic journals, and you would think that I was saying, "Let's go mentalism." They attacked my paper. Two reviewers just refused; they said, "You can't ask that question." They literally said we couldn't ask my question. They rejected it based on not being able to understand what we were asking, and I was incensed, so I mailed the editor, and the editor agreed with them.

And you would know these journals, you would know these people. But I get it. I get why this can strike a nerve. The good news is, my paper is published. But the only way I can get it published is if I went outside of the behavior analytic community. So I went to an ed psych journal and they published first go-around.

What is this controversial thing that I'm going to talk to you about today? It's graphing. And you can tell by the title here that things aren't so good. But before I get into that, let's talk about visual analysis. Visual analysis: this comes from Cooper, Heron, and Heward. They say that visual analysis looks at the extent and type of variability in the data, number one; the level of data, number two; and then the trends of data. If you do visual analysis, you must do these three things.

Here is another very nice quote, and this comes from Lane and Gast, and now you're talking about visual analysis. And they say that it's the cornerstone, the cornerstone of single case experimental design. And then they say, again, "You need to look at variability or stability, you need to look at level, and you need to look at trend." Do I have to convince anybody in here that if you are doing visual analysis, these things are important?

You probably all agree with me. I don't have this...all of our textbooks say that. This is just how we do our business. We take data...again, I'm talking about time series data here, and I'll talk about that soon. But if you're having time series data, you're looking at the variability, the level, and stability.
So take a look at this first graph. Here, we have an ascending trend. Then we have another ascending trend. How many would be convinced that the intervention was good? That the progress happened as a result of the intervention itself? Would you be suspicious? Why?
Man: The market's going up.

Rick: Exactly. It's already going up. So if something's going up and then you say, "I did this thing and it's going up," who are you convincing? This is done completely with visual analysis. You don't need statistics for this. You can look at the data and see. This is our science. So here, if it's flat and then it goes up, that's convincing. If we have something that's going down and we want it to go up, we can convince people. This is part of trend analysis.

And then there's something called level. If you have ten data points, there's a way to figure out what's the average of those ten. And there's different ways...you could just take the arithmetical average where you add them all up and divide by ten. You could take the median, which is what a lot of people recommend. And then that would become your level or your average. You could take the geometric mean, which is what I would recommend that other people do.

But let's just say, here, we have the average and baseline, which is a 2. Then you do something, now the average is a 10. Do we have an effect? Heads up and down. Yes, we have an effect. You went from 2 to 10. On average, you have an impact. How about here? Yeah, at best, you could say, "We have the most modest, mild effect." That's not something you're going to write home to Mom about, right? Really non-impressive there.

You're doing this through visual analysis. And then the last part of this would be the stability or the variability of the data. And imagine that you have data points that are bouncing around, and those two lines right here are just showing you...that's called the variability envelope. Here, although it went up, the variability envelope is the same. Has variability changed? No.

But look here. Has variability changed? Again, visual analysis...this is what we all need to be doing. This is what our textbooks say. This is when you read anything in our prominent journals, this is what people are doing. Or at least they should be doing.

So here's my question to you. Where does visual analysis occur? Well, this is the first question. Laboratory settings. Many of you have worked in labs or maybe you do work in labs, we do visual analysis there. A lot of you are doing applied work. Visual analysis, frequent. You did a thesis, you did a dissertation? You probably did visual analysis.

How about this conference? You go and see speakers who are talking about time series, you go to ABAI and you look at the poster sessions...you're swimming in this data. Our journals, this is how we define treatment effects, intervention effects, visual analysis. In other words, everywhere, you're going to have these time series data.

Now, here is the thing that is a critical question. What does visual analysis rely upon?

Woman: Graphs?

Rick: Yes, thank you, whoever said that over there. Visual displays. Graphs. Think about this for one minute. All that stuff about trend, level, and variability; if you have a compromised visual graphic, how good is your analysis? It will likely also be negatively affected. I can't tell you how important it is that we have a very good graph that we're making decisions upon. All of those things that I just shared with you; how we move our science forward, how we make applied decisions, how we affect lives. Everything is reliant upon that graph.

So what did I do? I wanted to know, how well do we do that as a field? Would you believe no one has ever done a study, not once, ever, to see how well we actually do our graphs? No one's ever done it. I don't know why. But I decided that's something I needed to do.

Now, if you're going to do a study about graphs, you have to understand that there are rules for constructing your graphs. I didn't make up these rules, so don't get angry at me when I tell you what these rules are. A lot of people don't know about all these rules. You can go back to 1915 and the American Statistical Association...and by the way, when you go back in time, the American Statistical Association had some really interesting people. And their take back then, in the early 1900s, on visual graphics was very different from what you would get from a modern-day statistician. Now, graphs are looked at as superfluous: "Why do you need it if you have numbers?" But back then, it was a very different time.

Back in '38, there was the American Standards Association. This was a bunch of business leaders, mechanical engineers, and people from statistics. In '60 and '79, the American National Standards Institute and the American Society of Mechanical Engineers. In '88, there's the Scientific Illustration Committee, and even the Department of the Army came up with rules. And of course, when you look at the ones that came later, you can see that they reference the earlier sources. So there are rules.

Where did these rules come from? These rules came from these people that designed the graphs. When you have a visual display, it's meant to do something. It's meant to tell you information. And depending on the type of visual display you're using, then there are rules for that construction.
How about in our field? Did any of you ever read this article right here by Parsons? This should be standard reading for everyone. It's in a chapter, and it's just a wonderful exposition of the rules and all of the things that we should be looking at for graphical display. Polling has nice information in his texts. Again, these are rules for graph construction. And of course, you've all probably read Cooper, Heron & Heward. Outside of our field...we didn't invent line graphs. We didn't do it. These three texts right here are people that are citing those sources that I shared with you, and there's more sources than that.

So I took all of these rules, number one, and I had to figure out, "What are all..." there's a lot of rules. In fact, if you went back in time, many people that would do graphs would send it off to a drafts person. A person highly skilled in the technical specifications of creating these graphs would create a graph. Now, with Excel, everybody does them. If you have access to a computer, you can whip up your own graphs.

How many of you were taught the rules of graphing? Where did you get those rules? From Cooper? Okay. So many of you, that was a lot of hands, which is excellent. You understand these rules. So I looked at these rules, and everything that is in Cooper, Heron & Heward is not everything that's in all the rules, right? A lot of them...the important ones are there. But there are more.

And so, our question was, "How well do these selected visual graphics that we find follow what's called the 'essential structure' and the 'quality features'?" The essential structure means if you don't have these things, you don't have a time series graph, period. And then quality features would be other features that enhance the usefulness of you understanding your data.
I always get a kick out of people when they're negative about graphs, and they're like, "Oh, we had these numbers, and those are just very simple..." there's so much complexity and elegance when you have time series moving through time. People just don't understand that. But we do have that. These quality features are important.

We followed...there are procedures out there for selecting the journals that we pick, because we have a lot of behavioral journals, more are coming on the scene all the time. So Carr and Brittain [SP], Critchfield, and then myself and colleagues, we have these criteria. And that led us to selecting eleven behavioral journals.

Tell me if you've heard of these journals before. "Behavior Modification," "Behavior Therapy," "Child and Family Behavior Therapy." "Cognitive and Behavioral Practice"? Yes, it is a behavioral journal. "Journal of Applied Behavior Analysis." "Journal of Behavior Therapy and Experimental Psychiatry." "Education and Treatment of Children." "Journal of Behavioral Education." "Journal of the Experimental Analysis of Behavior." "Learning & Behavior," which used to be called "Animal Learning & Behavior." "The Analysis of Verbal Behavior."

You've heard of those journals. If not all of them, probably most of them. Our field is not static. We have people doing applied work. We have people doing educational work. We have people doing clinical...we just have all...our rich science is moving into all of these different areas. And we need journals to be able to accommodate the things that we're interested in in talking about.

So what we did is we took all of these journals, and we went back to the day that they were started. JABA, 1968. So that meant that every two years, we would take a random issue, and then that's how we came up with all of these graphs.

We used all line graphs. If you had dual vertical axes, we excluded it. If you had a nominal or ordinal unit, like you named something on the vertical axis, that was excluded. If you had a non-time unit, that was excluded. And of course, if it was logarithmically scaled, we excluded that. We ended up with 4,300 graphs. That's a lot of graphs to go over and apply these rules to. And it was very difficult, but by gosh, that team of graduate students persevered, and they did it after about a year's worth of time. And now, I'm here to tell you about that.

So what are the results? Let's go over some of the things that we looked at and some of the things that we found. When you have graphs, a line graph...and again, I'm talking about time series graphs. That phrase, "time series," what does that mean? It means that you collect data, and that data marches through time. That time can be minutes, it can be hours, it can be days. It's some time unit over which your behavior is changing. Now, sometimes we have colleagues who share a scatter plot. That's not time series data. If you have a bar chart, that's not time series data. There's a lot of things we do that aren't necessarily time series. So everything I'm talking to you about today deals with time series.

But what's the most popular graph that we use as a field? It's this. They're everywhere. If you have a line graph, it has to have, at minimum, a vertical and a horizontal axis. I'm going to refer back to this frequently. This is a prototypical graph. And what you can see here is you have a vertical axis, and you have a horizontal axis. Take a look at this graph. Do you notice what's missing? They decided, "We don't need a vertical axis."

Here's the results. Ninety-eight percent of those 4,300 graphs we looked at had a vertical axis. That's pretty good. Ninety-eight percent. Ninety-seven percent had a horizontal axis. That meant a 2 to 3% error rate. You may be thinking as I present to you, "What's an acceptable error rate?" I don't know. No one's ever done this study before. What do you think? I'd like to hold us up to a standard of, "We should never have an error."
There's one other study that I looked at...you may be familiar with one of the most prominent journals in the world, which is called "Science." And this person named Cleveland went through Science and he looked at what kind of errors...he wasn't looking at line graphs, but he [inaudible 00:17:25], and people make errors in graphs. And his rate was anywhere from 5 to 10%, depending on the things that he looked at. I don't know what the acceptable rate would be.
Vertical and horizontal axes have to have labels. When you look at the graph, you need to know "What's going on here?" So the way it works for line graphs, you have a quantitative amount here on the vertical axis, and then you have a unit of time. You have to have a unit of time. Because what are these graphs? They're time series. So you have to have a unit of time.

Here's a graph, and they have labels, but what happens there? How do you know what that is? What do you have to do? The only way you can know is if you go into the journal and actually read it, and hope that that information is in there. Tukey, who was a great data scientist, did a lot of stats, but he also talked a lot about visual display. He said a good graph should show you all the information just by looking at it. And that's true. And one of the fundamental parts of that is having labels on our graphs.

Another issue, which you may be familiar with, is sessions. Sessions is...it's not a time unit. We included this because so many people use sessions. But that also occurs often. So what are the results here? Eighty-three percent of vertical axes had a label. That meant that 17% of the articles we reviewed didn't have a vertical axis label. That's a pretty big number when you think about it. Thirty-one percent had a time unit on the label. Thirty-one percent. That meant that almost 70% of the articles we looked at either had no label at all, or they had what's called a non-time unit label. Almost 70%.

Have any of you heard of the National Institute of Standards and Technology? You may have. They have what's called the Office of Weights and Measures. And what they do is they're designed to do one thing, and that is to promote uniformity among measures. And you've seen this...you live your lives with the decisions that they make. And they define time as seconds, minutes, hours, days, weeks, months, and years. Guess what's missing?

Man: Sessions.

Rick: Yes. Sessions is not a unit of time. Don't use it. It's really amazing to me that we have all of these graphs with sessions. Doesn't it matter to you if we do an intervention and it happens in one week, one month, or one year? Don't you deserve that information? You do. I do. Our consumers do. When you use sessions, you just say, "You know what? You don't deserve that information. Because you have no way of knowing." You could say one-hour sessions, or one-hour non-consecutive sessions. Tell us the time unit. Very important.

Here's another aspect of creating graphs; it's called the Proportional Construction Rule. Some people also call it the Two-Thirds Rule or the Three-Fourths Rule. When you look at how graphs are created, there's a vertical axis and there's a horizontal axis. When you look at those two axes and put them together, do you see the proportion of the vertical to the horizontal? It needs to be two-thirds or three-fourths...that's 66 to 75% of the length of the horizontal axis.

Why is that rule there? This top graph is created with the Proportional Construction Rule, and you can see the slope of the line. There are very technical reasons for why it should be two-thirds, which I won't get into here. But look what happens when you decide to stretch the vertical axis. What does it do to the slope? You've just exaggerated your data. You've just said, "Oh, look at this. It's really good."

What happens if you squish, if you compress the vertical axis? What does it do to the slope now? It depresses it. Think about that for a moment. We have a field, we have an army of people out there doing this stuff, silly-puttying their graphs. How is that good for any of us? Here's a graph that I'll show you. There's the vertical, there's the horizontal, and what do you notice? That wasn't three-fourths.
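
You can reproduce the distortion yourself. A minimal matplotlib sketch (hypothetical data; set_box_aspect assumes matplotlib 3.3 or later) draws the same data once at roughly the two-thirds proportion and once with a stretched vertical axis:

import matplotlib.pyplot as plt

days = list(range(1, 11))
counts = [3, 4, 4, 5, 6, 6, 7, 8, 8, 9]       # hypothetical data

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
for ax, aspect, title in [(ax1, 2 / 3, "Proportional (two-thirds)"),
                          (ax2, 2.0, "Stretched vertical axis")]:
    ax.plot(days, counts, marker="o")
    ax.set_box_aspect(aspect)                 # plot-box height relative to its width
    ax.set_title(title)
    ax.set_xlabel("Days")
    ax.set_ylabel("Count")
fig.tight_layout()
plt.show()

The same numbers look far steeper in the stretched panel, which is exactly the exaggeration described above.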

What are the results? Only 15% of the graphs that we reviewed follow that rule. That means that 85% of the graphs, really close to nine in ten, are not making [inaudible 00:22:23]. If that generally holds up, nine out of ten of you are not following this rule. That is something that we must address. If we're going to continue to use linear graphs...that's the second part of my talk. But if we're going to continue using these linear graphs, we must form our graphs properly. Otherwise, we're telling lies. We're exaggerating things.

And I'm not saying we're doing this on purpose. A lot of people don't know better. In fact, I've actually found some single-case design books, and some journals, where they encourage people to do that. They say, "Oh, do you want to see the effect? Well, you should really stretch that out to show the effect." That's exaggerating our data. We should never be doing that. We need to understand what the data are telling us and react to that appropriately.

How about tick marks? This may seem mundane, but graphs need to have tick marks. Tick marks help you understand the data, because data occur in time and also have a value. Tick marks help you orient; you form an understanding of what the data are telling you. Some people don't believe in using tick marks, as in this article, for reference.

So when we looked at our graphs, 78% had tick marks. That means that 22% of graphs were missing vertical axis tick marks, were missing horizontal axis tick marks, or had no tick marks at all. That may seem like a minor thing, but that's an error. You should not be constructing graphs that don't have tick marks.
Let's talk more about the tick marks. Did you know that tick marks are supposed to point out? Not over, not in: out. It seems like, "Wow, I never knew that." Here, you can see the tick marks are pointed out. If you look at these tick marks here, you'll notice the difference. The reason for that rule is that tick marks pointing in make a busy chart, and anything that detracts from your ability to extract the story of the data is not good. Tick marks pointing in may seem minor, but that's a rule. So we looked at how many tick marks were pointing out. We found only 43% of the graphs that we examined followed that rule, which meant 57% of the graphs had tick marks that were pointing inward or over those axes.
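
In matplotlib, tick direction is a single setting, and "out" happens to be the default in current versions. A quick sketch with hypothetical data:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [2, 3, 5, 8], marker="o")   # hypothetical data
ax.tick_params(axis="both", direction="out")      # ticks point outward, away from the data region
plt.show()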

Condition labels. What's happening with the data when you do what you do? Here, we have a condition called baseline. Here, we have another condition, the intervention. You all do things; maybe you have a baseline, and you have different interventions you do. If you're going to share that information with anybody, you need to let us know what you're doing. Otherwise, we don't know.

Here is a graph, and if you look at it, you don't see anything. But if you read the article, what you find out is that there are not only no condition lines, there are no condition labels either. So I went in...you have to go in, and then you can see what they're doing. That's not a good graph if you don't have your condition labels and you're not letting the person know what the differences are. Because otherwise, what you're doing is forcing the reader to go back and forth between the text and that visual picture. That visual picture should be striking. It should speak to you. It should be evident what happened. That's what our science is built upon. That's what we all need to aspire to.
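
For anyone graphing in code, condition change lines and condition labels take only a few lines. Here is a minimal matplotlib sketch with hypothetical phase boundaries and data:

import matplotlib.pyplot as plt

days = list(range(1, 11))
counts = [8, 9, 7, 8, 9, 4, 3, 3, 2, 2]           # hypothetical data

fig, ax = plt.subplots()
ax.plot(days, counts, marker="o")                 # legible data points, not just a data path
ax.axvline(x=5.5, color="black", linestyle="--")  # condition change line between day 5 and day 6
ax.set_ylim(0, 12)
ax.text(3, 10.5, "Baseline", ha="center")         # condition labels over each phase
ax.text(8, 10.5, "Intervention", ha="center")
ax.set_xlabel("Successive calendar days")
ax.set_ylabel("Responses per day")
plt.show()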

The results? Ninety-three percent of graphs had labels, which means 7% of those graphs didn't have labels. Here's something that should be very obvious. I don't always get angry when I get rejected, because if you're in higher ed, if you publish things, you're going to get rejected almost all the time. Very few people are going to have most of their things accepted. So I'm okay with it, because that's part of science. You take your data, you give it to your peers, they critically review it, and then you move forward.

But the journal editor and these reviewers really bothered me, because I'm so passionate about our field. And the fact that these big-name people were saying things like...here's why two of these people rejected my paper. They said, "There's no evidence for what you're saying." Because I didn't have evidence that you have to have this construction, because I didn't have studies that showed what visual effect that had, they wouldn't review my article. I'm like, "That's another study. I didn't ask that question." There are these rules over here, and my question was, "Do people follow these rules?"

But one of the things that I argued was, "Okay, yes, that's something that should happen. There should be a whole arm of behavior analysis that studies just this: graphics." And I argued, "Okay, so you mean to tell me that I actually need data to convince you that our data points need to be legible on a graph?" That's how ridiculous these reviewers, in my estimation, were. They just couldn't handle the fact that all of these results I'm sharing with you don't paint a good picture of what our graphs look like in journals.

So there's our data points. These are very clear. How clear are those data points? They're not even there. Data paths are all you have. I want to see the data points. You should want to see the data points. So what are the results? Are these legible? Well, 86% of the data points were legible, which meant 14% of the data points reviewed were not. That's a big number: 14% of 4,300.
I did other things that I'm not going to walk you through here, because you can start looking at comparing data, and there are all kinds of rules that we should be following. We did that in our paper. If any of you are interested, I'd be happy to share that with you.

But this impacted me, and it compels me to share this with anybody that'll listen. So thank you all for listening to this and not throwing tomatoes at me and walking out. I feel if you're a scientist, you have to embrace the data. And if you don't like the data, that's okay. Science is a marketplace of ideas. I do research on certain things. But you know what? If data came around and said, "That thing you're doing is not good," I wouldn't be doing that thing anymore. I'd be doing a new thing. That's what we have to do as scientists. We must accept what the scientific evidence tells us.

So where am I in my career? After doing this...and I'm going to continue doing research on graphs, you're probably saying, "Well, what is the solution to this?" There are problems and...



If you want more of Rick, but can’t make it to any of our webinars or events, check out a full online school set up and taught by Rick. http://www.precisionteachinguniversity.com/

 

Stop Hating on Percent

12/4/2015
 

I recently attended a conference where one of the speakers apologized for sharing a finding with percent. I will admit that apology surprised me.

Should we apologize for the product generated by multiplication? Do we say “sorry” when we receive the quotient from a division problem?

It seems odd to express regret for math. But here we had a professional doing just that.


Percent
Percent is simply a notation for hundredths, or parts per hundred. A percentage is the number obtained by taking a percent of another number.

Example 1: I took a random sample and counted that 2 people out of 100 have red hair. Therefore, 2% of the people in my sample have red hair.

Example 2: First grader Sarah spelled 8 words correctly out of 10. Sarah spelled 80% of the words correctly on her spelling test.

Example 3: For the "Biggest Loser" contest at work, Mike weighed 220 pounds and lost 20 pounds (final weight 200 pounds). Amy weighed 115 pounds and lost 15 pounds (final weight 100 pounds). Mike had a 9% reduction in total weight. However, Amy had a 13% reduction and won the contest.
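
If you want to check the arithmetic, a few lines of Python reproduce the percentages from Example 3:

# Weight-loss numbers from Example 3 above.
mike_start, mike_lost = 220, 20
amy_start, amy_lost = 115, 15

mike_pct = 100 * mike_lost / mike_start   # about 9.1%
amy_pct = 100 * amy_lost / amy_start      # about 13.0%

print(f"Mike lost {mike_pct:.0f}% of his starting weight")
print(f"Amy lost {amy_pct:.0f}% of her starting weight")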

Information

The percentages in the examples above give useful information. There's no reason at all to apologize for using them. Then why all the fuss? Wait, did I just use "fuss"? (Rick's mental note: update references and never use "fuss" again; it sounds like I live in the '50s.)

Saying "2% of a sample has red hair" quickly communicates a relationship. Namely, 2% tells us we have a small number of people with red hair.

Likewise, Sarah spelling 80% of her words correctly also offers useful information. The percent value quickly and simply conveys size or scale.

A further advantage appears in the Biggest Loser contest (Example 3). If the contest just looked at the absolute amount of weight lost, Mike would have won. But Mike weighed more to start with. Amy losing 13% of her weight, compared to Mike's 9%, means that in terms of relative weight loss, Amy did better.

As we have seen, there is no reason to become upset at percentages. They do what they do. But percentages can pose problems in certain situations.

The Problem with Percentages for Time-Series Data

In education and psychology, many practitioners measure behavior over time, which produces time-series data. Time-series analysis involves carefully measuring behavior across a period of time.

Example 1: First grader Sarah spelled 8 words correctly out of 10 (80%) on Monday. Tuesday through Friday, Sarah had the following percentages: 80%, 70%, 90%, 90%.

Example 2: Fred and Jill want to stop smoking. Fred recorded the following percentage decreases in cigarettes smoked using a nicotine patch: Monday 10%, Tuesday 12%, Wednesday 13%.

Jill used the cold turkey method (which involved lots of encouragement from her friends). Her reductions for Monday, Tuesday, and Wednesday, respectively: 6%, 6%, 8%.

With time-series data, professionals (e.g., teachers, school psychologists, behavior analysts, psychologists) need the most precise information they can bring to bear to understand the effects of intervention on behavior.

Adding information to Sarah’s spelling performance demonstrates a problem. Look at her data in time:

Monday: 8 correct, 2 incorrect in 40 seconds
Tuesday: 8 correct, 2 incorrect in 42 seconds
Wednesday: 7 correct, 3 incorrect in 39 seconds
Thursday: 9 correct, 1 incorrect in 55 seconds
Friday: 9 correct, 1 incorrect in 56 seconds

While Sarah improved her words spelled correctly, she did so at the expense of time. It took Sarah much longer to spell more words correctly. Percentage completely ignores the time element.
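
One way to see what percent hides is to put a frequency (count per minute) next to the accuracy percent. A minimal sketch using Sarah's numbers from above:

# Sarah's spelling data from above: (correct, incorrect, seconds).
days = {
    "Monday":    (8, 2, 40),
    "Tuesday":   (8, 2, 42),
    "Wednesday": (7, 3, 39),
    "Thursday":  (9, 1, 55),
    "Friday":    (9, 1, 56),
}

for day, (correct, incorrect, seconds) in days.items():
    percent = 100 * correct / (correct + incorrect)
    per_minute = correct / (seconds / 60)        # frequency: correct words per minute
    print(f"{day}: {percent:.0f}% correct, {per_minute:.1f} correct per minute")

# The percent rises from 80% to 90% across the week,
# while the frequency falls from 12.0 to about 9.6 correct per minute.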

If time matters, and it should to everyone serious about behavior change, ignoring how long it takes to perform a behavior will lead to less effective interpretation and subsequent decisions.

What about Fred and Jill? Again, we have a problem. Percentage only tells us the size or scale of each measure. We don't know how many cigarettes Fred and Jill actually smoked. If Fred had a pack-a-week problem whereas Jill smoked a pack a day, Jill's percentage decrease could dwarf Fred's in terms of the number of cigarettes not smoked (the absolute amount of change). So how can we know whether the nicotine patch or the cold turkey method works better without more precise numbers?

Conclusion

Percentage has its place in the world of math and can help people understand some phenomena with a single number. Yet in other situations, such as an intensive analysis of behavior with time-series data, percentage hides important features of behavior change. Understanding when to use percentage, and when not to, will facilitate a more productive analysis of data. As the famous psychologist Alfred Adler said, "Mathematics is pure language - the language of science." Let's make sure we always speak as clearly as possible!

Using Chartlytics for navigating social behavior outcomes

10/20/2015
 

Problem behavior requires the best applied science available: applied behavior analysis (ABA). ABA developed from B. F. Skinner's experimental work with animals in tightly controlled laboratory settings.


Figure 1. The man himself, B. F. Skinner.

Skinner observed that animals (e.g., pigeons) responded in lawful ways to a particular set of events. For example, when a light came on, if the pigeon pecked a disk, the pigeon immediately received a food pellet. In the future, when the pigeon entered the experimental chamber, it would behave in a similar manner: when the light came on, the pigeon pecked the disk and received food. Skinner called his discovery positive reinforcement.

It turns out that the experimental findings Skinner and his colleagues made in the laboratory also apply to humans. Many people have heard of "time out." The time out procedure came straight out of the experimental chamber and works on animals the same way it works on humans. Of course, that assumes people actually do time out correctly! Sadly, some people use distorted versions of time out that rarely work as intended.

People also warped Skinner’s advice on using punishment; he adamantly warned against it! Skinner and his colleagues discovered what punishment procedures do. As a result of seeing the harmful effects, Skinner spent his entire career encouraging people to use positive reinforcement and other non-punitive measures to change behavior.

ABA grew from its humble laboratory beginnings as many practitioners wanted to harness the powerful science of behavior (i.e., Applied Behavior Analysis) to better humankind. Flash forward to today and an embarrassment of behavior change riches has now made its way into journals, parenting magazines, conferences, classrooms, board rooms and even popular media.

Old School Experimental Behavior Analysts used Standard Graphics

The methods of ABA rest on careful observation and experimentation. Therefore, many people call behavior analysis the science of behavior. But like any science, progress depends on the quality and precision of a measurement system. 

B. F. Skinner and his contemporaries had the advantage of a sensitive monitoring device providing standard real time data. Everyone who used a “cumulative recorder” would see data forming distinctive patterns. Just like going to a doctor who interprets standard views of heart activity on ECGs, Skinner and his crew quickly deduced what specific change patterns meant.

Skinner loved standard graphs and said the following, “… the curve revealed things in the rate of responding, and in changes in that rate, which would certainly otherwise have been missed” (Skinner, 1956, p. 225). Without a standard view and a standard data metric called frequency (or rate), ABA would not exist today.

New School Applied Behavior Analysts used Nonstandard Graphics

Along the way to developing ABA, almost all of the practitioners and experimenters abandoned standard graphics like the cumulative record in favor of nonstandard graphics like the linear graph. If that sounds like one of those slasher flicks where the guy slowly walks through a rickety old house with creepy music playing, you can guess the outcome. Once everyone started making their own graphs, everyone started making them differently (nonstandard means exactly that: no standards in graph construction).

However, one of Skinner's graduate students, Ogden R. Lindsley (who would later go on to make amazing contributions of his own), took the lessons he learned from his dear ole mentor and created a standard visual display called the Standard Celeration Chart (SCC). What Lindsley offered the world, and especially behavior analysts, may one day completely change how people measure and view data.


Figure 2. A Standard Celeration Chart created by Ogden Lindsley.

Lindsley, like the other practitioners of ABA, wanted to change the world for the better. He created a standard ratio chart called the Standard Celeration Chart. The following sections describe how practicing behavior analysts can benefit from monitoring social behaviors in need of change (e.g., calling out in class, hitting others, stealing food, self-injurious behavior) on the SCC.

The Scope of Features

The following infographic displays a number of visual features offered by the Standard Celeration Chart. These discernible properties of behavior appear to SCC users, helping elevate the practice of ABA. Those using nonstandard linear graphics not only lack these visible characteristics of behavior, but must also contend with the chaos of nonstandardization (widely variable sizes of axes, scalings, scale labels, and graphical symbols).


Figure 3. Twenty-six visual features of behavior provided by the SCC.

While covering all of the visual features falls beyond the scope of the present blog post, I will highlight a few essential ones for ABA.

Real time charting. When we visit the doctor with either an injury or illness, we ask the doctor how long the treatment will take. We all want to know how long we must suffer the pain or inconvenience caused by malady or harm.

The SCC has four different charts with different time scales. The daily SCC, for instance, presents data in the scale of days. Behavior analysts who use the daily SCC will see how the intervention they applied to behavior changes across each week (seven days = a “celeration period”).

If more behavior analysts used the SCC we would know generally how long procedures such as time out, positive reinforcement derived interventions, or differential reinforcement take to change behavior. 

In other words, behavior analysts could tell parents how long an intervention would likely take. Furthermore, real time charting shows the effects of illness, vacation days, and time spent away from an intervention.

Celeration line. Skinner and his colleagues used frequency or rate (the count or number of behaviors over a specific time period) to measure behavior. Lindsley and Precision Teachers (Lindsley started a measurement system called Precision Teaching) also used frequency to measure behavior. Placing successive frequencies across time gave rise to a new, powerful behavioral measure called celeration.

The celeration line shows the direction and speed of behavior change. The steeper the celeration line, the faster the behavior has changed. Applied behavior analysts who use celeration lines can see how fast a pinpointed behavior such as "calls out answer in class without permission" changes as a result of an intervention.

In Applied Behavior Analysis, a hallmark of the science resides in a trait called "effective." According to Baer, Wolf, and Risley, effective means: "If the application of behavioral techniques does not produce large enough effects for practical value, then the application has failed" (Baer, Wolf, & Risley, 1968, p. 96).

A celeration line flashes a red or green light in the face of the behavior analyst. Effective interventions change behaviors in the right direction and do so quickly. Without celeration lines and real time behavior monitoring, behavior analysts have no way of knowing, or communicating to others, the precise speed of change of any intervention.

Using the celeration line also permits behavior analysts to compare individual interventions and their efficiency. On a daily Standard Celeration Chart, a x1.5 celeration value (the quantification of the celeration line) means the behavior grew 50% each week. Behavior analysts can report these effects to parents, teachers, insurance companies, and other people who have a vested interest in the rate of behavior change.
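
As a rough illustration only (not the formal charting convention, which fits a celeration line to all of the data points), a weekly celeration value can be approximated as the ratio of a frequency to the frequency one celeration period earlier:

# Hypothetical frequencies one celeration period (seven days) apart.
freq_day_1 = 10.0     # correct responses per minute on day 1
freq_day_8 = 15.0     # correct responses per minute on day 8

celeration = freq_day_8 / freq_day_1
print(f"Weekly celeration: x{celeration:.1f}")   # x1.5 means the behavior grew 50% in a week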

Projected Celeration Line

The celeration line becomes possible because of the special architecture of the Standard Celeration Chart. The vertical ratio scale shows behavior changing by equal ratios instead of equal intervals (as it does on linear graphs).

The properties of the ratio scale result in very nice, accurate projections of future behavior. Straight-line projections on ratio charts like the SCC turn into curvilinear lines on linear graphs. The ability to accurately project the future course of behavior facilitates the analysis, evaluation, and planning of interventions.
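
In code, the projection is just repeated multiplication by the celeration, one multiplication per celeration period; the numbers below are hypothetical:

# Project a frequency forward on a ratio scale by multiplying by the celeration each week.
current_frequency = 10.0     # responses per minute today (hypothetical)
celeration = 1.5             # x1.5 per week (hypothetical)

for week in range(1, 5):
    projected = current_frequency * celeration ** week
    print(f"Week {week}: about {projected:.1f} responses per minute")

# On the SCC this projection plots as a straight line; the same numbers
# trace an accelerating curve on an equal-interval (linear) graph.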

Bounce

Lindsley used the plain English word "bounce" to communicate the concept of variability to teachers, parents, and students. When measured across time, behavior bounces around. The bounce always occurs because humans never perform a behavior the exact same way. That variation, or bounce, may be very slight (low bounce) or quite large (high bounce).

The degree of bounce directly reflects the influence of an intervention. As an example, examine the charted data of two kindergartners in the same class. The teacher does have an intervention: classroom rules that say students cannot leave their seats without permission.


Figure 4. Bounce on the SCC.

On which student does the rule exert greater influence? The bounce for student 2 shows consistency and regularity. By contrast, the bounce for student 1 has a much larger envelope. The larger bounce means student 1 shows much more inconsistency; one day he may get out of his seat once, whereas another day he may stand up five times.

The SCC not only shows bounce clearly, it also allows behavior analysts to explicitly quantify the degree of influence. Student 2 has a bounce of x2. Student 1 has a bounce of x5. The x2 bounce makes obvious the degree of influence the intervention has on out-of-seat behavior. Student 1's x5 bounce shows more variability, indicating the classroom rules (the intervention) do not have very strong control (influence) over the targeted behavior. In fact, student 1 has 2.5 times more behavioral variability than student 2.
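
One simple way to express total bounce as a ratio is the highest frequency divided by the lowest within a condition. Here is a sketch with hypothetical out-of-seat counts chosen to match the x5 and x2 values above:

# Hypothetical daily "out of seat without permission" counts for one week.
student_1 = [1, 5, 2, 4, 1]      # inconsistent from day to day
student_2 = [2, 3, 2, 4, 3]      # fairly consistent

def bounce(frequencies):
    # Total bounce as a ratio: highest frequency over lowest frequency.
    return max(frequencies) / min(frequencies)

print(f"Student 1 bounce: x{bounce(student_1):.0f}")    # x5
print(f"Student 2 bounce: x{bounce(student_2):.0f}")    # x2
print(f"Student 1 has {bounce(student_1) / bounce(student_2):.1f} times more variability")  # 2.5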

Conclusion

The previous three attributes of the SCC markedly enhance behavior analysts' ability to detect meaningful changes. Furthermore, the analysis, interpretation, and communication of applied data outcomes greatly improve.

References

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97. doi:10.1901/jaba.1968.1-91

Skinner, B. F. (1956). A case study in scientific method. American Psychologist, 11, 221–233.

Training the Big 6 (and becoming a Hero) with Chartlytics

9/23/2015
 

Hello fellow charters,

My name is Simon Dejardin. I’m a BCBA in France, in the Paris suburbs. I supervise an ABA program for adolescents with ASD.

Last November, I welcomed a 14-year-old girl named Cecilia. She has a rare genetic syndrome called Potocki-Lupski. The syndrome includes physical characteristics such as exotropia, low muscle tone, scoliosis, and many others that can cause health complications. The syndrome is also associated with a slew of behavioral symptoms such as repetitive or stereotypical behaviors, wandering, speech delays, physical aggression, and self-mutilation.

When I met Cecilia, I had to make decisions about her curriculum and IEP goals. I decided to work primarily on verbal behavior. Why? First, she was 14 and had very few functional communication skills. Second, a functional assessment indicated her aggressive behaviors were maintained by positive reinforcement (edibles and tangibles) and negative reinforcement (escape from aversive situations).

When she came to us, Cecilia used PECS. But when my staff tried to communicate with her, we quickly saw that she exchanged only three pictures, engaged in frequent response scrolling, and almost never looked at the pictures! So we decided to try something different.

My next thought was to teach her to sign. We picked three items that she liked: cookies, music, and candy. Again, we quickly found ourselves at a dead end - her motor skills and visual tracking of items were too poor.

So we were back to picture exchange, except this time we used large, authentic photographs. Again, we faced the motor and visual tracking issues. I decided to resolve the problem by building Cecilia's fluency with the Big 6: reach, point, touch, grasp, place, and release. If she could perform the six motor movements fluently, all the prerequisites would be in place to teach her to exchange pictures with someone else.

    Free-reach: We placed items on a table and measured the frequency of free-reaches toward preferred items, moving them as soon as she reached them. She didn't reach the performance standard (or frequency goal) for this movement cycle (MC) but improved her performance by x2 in three weeks.

    Free-point: We implemented a forced choice with two items, a preferred one and a neutral item (a sock). We asked her, "What do you want?" and prompted her to point at the preferred item as soon as she started to reach for it.

    Free-touch: We worked on the MC by using a clicker. Shaping with a clicker is particularly helpful with frequency building because an auditory "click" is immediate, unintrusive, and, when paired with a primary reinforcer, very powerful for teaching new skills. We presented a target and clicked each time she touched it.

    Free-grasp/free-place/free-release: I chose to work on the 3 MCs at the same time by teaching Cecilia to grasp a coin from a box, place it above a money box, and release the coin into the money box. We used a clicker to reinforce every successful attempt.

I attribute the success of this first phase of treatment to the combination of Precision Teaching and shaping with a clicker (also known as clicker training). I think this is one of the most powerful combinations of tools we can find in behavior analytic treatment (the second, in my opinion, is the combination of Precision Teaching and Direct Instruction).

Using Chartlytics was a great bonus compared to traditional graphing. Specifically, interns who didn't know about Precision Teaching before the intervention were able to make decisions regarding Cecilia's performance on a daily basis.


Figure 1. A Standard Celeration Chart generated by Chartlytics.


Figure 2. A second Standard Celeration Chart generated by Chartlytics.

Now we were ready!

With the Big 6 in her repertoire, Cecilia was prepared to reach toward a picture, touch it, grasp it, place it over an adult’s hand and release it.

After the intervention, motor skills and visual tracking were no longer a problem, so we could move toward more traditional mand training. Each time Cecilia reached for an item, we redirected her to the correct picture with a physical prompt. After some time, we successfully faded use of the physical prompt as well as the clicker.

Presently, Cecilia is able to exchange almost twenty small pictures to ask for edibles, tangibles, and the removal of aversive stimulation.

The fine motor movement cycles Cecilia developed through practicing the Big 6 appear to have generalized to other situations, giving her access to new activities. Now she can participate with her peers in daily activities such as bringing in the mail and placing cutlery in the dishwasher after a meal. We have seen dramatic improvements in her daily living skills, particularly in dressing: buttoning, zipping, and fastening Velcro.

We would not have achieved these outcomes so quickly without frequency building and use of the Standard Celeration Chart.  Chartlytics made it particularly easy on my staff. After setting up all the inputs, they could use the application for data recording and decision making without supervision or additional training.  My administrators quickly understood the importance of the Chartlytics application and allowed me to sign up the organization for an account.  I thank them for that.

Finally, I thank Cecilia for becoming my Big 6 Hero.
