You put in the hours observing, interviewing, and assessing directly. You leave the team an ABC sheet, a scatterplot, or, dare I say, something more complex for a week or two. Upon your return, you find three data entries (if you’re lucky).
Getting your team to take data consistently is one of the most common frustrations practitioners bring up, and most of the advice out there lands in the same place: train them better, remind them more, add it to supervision. That stuff only helps so much. If your team isn’t taking data reliably, the data collection system itself is usually the real problem - and that’s on you.
The system is designed for you, not them
When a BCBA designs a data sheet, they’re thinking about what they need for their analysis. Interval breakdowns, multiple targets, topography notes, duration fields, antecedents, consequences - the list goes on. I don’t blame you; it makes sense from an analysis standpoint.
But support staff and caregivers trying to run a program with an active kid in front of them aren’t thinking about your analysis. They’re thinking about not losing track of what just happened while also delivering the next trial and managing everything else in the room. If your data sheet requires them to stop and think, they’re going to stop taking data.
The question worth asking: can your team record a data point in under three seconds without breaking momentum? If the honest answer is no, the sheet is too complicated for the setting it’s being used in.
Complexity compounds in the field
A data system that feels manageable in a planning meeting falls apart fast in a busy session. Every extra column, every abbreviation that requires memory, every moment where the team member has to decide which box to mark - that’s friction. Friction accumulates. Eventually the path of least resistance is writing something plausible at the end of the session rather than recording in real time.
Which means this is a design problem, not a training problem.
Data is a reflection of the system, not the staff
Another common reason team members hesitate to take data is that the data isn’t “pretty” - it’s full of errors, X’s, and incorrect answers. Some people feel that reflects poorly on them, as if it makes them a bad teacher. So they either record the data incorrectly, or they don’t take it at all. I don’t fault them for it. After all, the way we behave is a product of past consequences. It just means we need to help them reframe things: a data sheet with some errors is just as informative as - if not more informative than - a data sheet of nothing but checkmarks.
What actually helps
- Simplify the data sheet first. Strip it down to the fields you’ll actually graph. Lots of whitespace keeps it easy to scan; a large font keeps it readable mid-session.
- Match the recording method to the behaviour. Frequency counts work for discrete behaviours with a clear start and stop. Interval recording works when the behaviour is continuous or hard to count discretely. Using the wrong method creates ambiguity, and ambiguity creates inconsistent data (see the sketch after this list).
- Brief your team on the why. Not a lecture - just a sentence. “We’re tracking this because we want to know if the new antecedent strategy is working.” Staff who understand what the data is for are more likely to take it seriously than staff who are filling out a form because the BCBA said so.
- Build the recording moment into the session structure. If you expect data to be taken at a natural pause - end of a trial block, end of an activity - that moment needs to actually exist in how the session is structured. If sessions or activities are back-to-back without any built-in transition time, expecting clean data at the end is optimistic.
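To make the frequency-versus-interval contrast concrete, here’s a minimal sketch in Python - purely illustrative, with made-up event times, not a clinical tool - showing how the same ten-minute session comes out under a frequency count versus partial-interval recording:

```python
# Toy example: the same 10-minute session, scored two ways.
# Event times (seconds into the session) are invented for illustration.
event_times = [12, 15, 95, 96, 97, 300, 305, 610, 615, 618]

SESSION_LENGTH = 600  # 10-minute session, in seconds
INTERVAL = 60         # partial-interval recording with 1-minute intervals

# Frequency count: tally each instance. Works when every occurrence
# has a clear start and stop.
frequency = sum(1 for t in event_times if t <= SESSION_LENGTH)

# Partial-interval recording: mark an interval if the behaviour occurred
# at any point during it. Works when instances are hard to count discretely.
intervals_with_behaviour = {
    t // INTERVAL for t in event_times if t <= SESSION_LENGTH
}
n_intervals = SESSION_LENGTH // INTERVAL
percent_intervals = 100 * len(intervals_with_behaviour) / n_intervals

print(f"Frequency count: {frequency} occurrences")
print(f"Partial-interval: {percent_intervals:.0f}% of {n_intervals} intervals")
```

Seven responses collapse into three marked intervals because the burst around the 95-second mark lands inside a single interval. Neither number is wrong; they answer different questions, which is why the method has to fit the behaviour you’re measuring.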
The supervision angle
Meet regularly with the team early on. This should be a familiar process: early in learning, prompt frequently; as performance stabilizes, fade the prompts. We do this with our learners, and we can do it with our team members.
Reviewing data with the team accomplishes two things. The obvious one is catching problems. The less obvious one is signaling that the data actually matters to someone. When data gets collected and never discussed, staff notice. The implicit message is that it’s paperwork - and people treat paperwork accordingly.
Even a brief data check at the start of supervision - “this looked like a good week for X, did that match what you were seeing?” - closes that loop. It doesn’t have to be a deep analysis every time.
The harder conversation
Sometimes the data isn’t coming in because the target itself isn’t well-defined. Vague operational definitions create judgment calls on every trial, and team members quietly stop making those calls because they’re not confident they’re making them correctly. If you’re getting inconsistent data across staff on the same target, that’s worth checking before you assume the problem is follow-through.
Clean data starts with a definition that two different people would apply the same way in the same moment. If you’re not sure yours clears that bar, test it - describe a scenario to your team member and ask them whether they’d score it. Their answer will tell you a lot.
Getting your team to take data isn’t really about the team. It’s about building a system they can actually use in the conditions they’re actually working in. Simpler sheets, clearer definitions, recording methods matched to the behaviour, and supervision that signals the data means something.
If you’ve been fighting the same data collection battle for months, it’s worth stepping back and asking whether the system is the problem before you conclude the people are.