CBT · self-monitoring · therapy

How reviewing your log is a therapeutic act

20 April 2026 · 8 min read

There's a pattern in CBT sessions that almost never varies. The session begins with a review of the self-monitoring records the client kept since the previous appointment. Not a quick glance, but a deliberate, collaborative examination of what the data shows. What happened? What preceded it? What followed? Were there surprises? Does the pattern match the working hypothesis, or does it suggest something different?

The clinical literature is firm on this point: "self-monitoring records should be routinely reviewed in each session and the data be used to help the client and therapist collaborate, develop formulations and plan interventions" (Korotitsch & Nelson-Gray, 1999). Data that goes unreviewed does half the work it could. The reviewing is not incidental; it's where the clinical value is realised.

When you don't have a therapist, you become the reviewer. This is not a compromise position. It's a legitimate version of the practice, and there are specific ways to do it well.

What a good data review looks for

A CBT therapist reviewing a client's self-monitoring record is not just checking what was consumed. They're looking for the pattern: the relationships between variables that reveal the mechanisms governing the behaviour. Specifically:

What preceded the highest-consumption days? Look at the two or three days with the most units logged. What was the mood rating that day? What time did the drinking start? Was it a weekday or weekend? Do the notes record anything notable? The antecedents of the heaviest days are usually more informative than the average.
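If your log lives in a spreadsheet or exports to rows of daily entries, this check takes a few lines. The sketch below is a minimal illustration; the field names and example rows are assumptions, not a fixed export format:

```python
# Sketch: surface the highest-consumption days with their antecedent context.
# The rows and field names below are illustrative assumptions, not real data.
log = [
    {"date": "2026-04-06", "units": 2, "mood": 6, "note": ""},
    {"date": "2026-04-07", "units": 8, "mood": 3, "note": "deadline week"},
    {"date": "2026-04-08", "units": 1, "mood": 5, "note": ""},
    {"date": "2026-04-10", "units": 7, "mood": 4, "note": "argument at home"},
    {"date": "2026-04-11", "units": 0, "mood": 6, "note": ""},
]

# Top three days by units logged, alongside mood and notes for that day.
heaviest = sorted(log, key=lambda row: row["units"], reverse=True)[:3]
for row in heaviest:
    print(row["date"], row["units"], "units, mood", row["mood"],
          "-", row["note"] or "no note")
```

Reading the mood and note next to the unit count, rather than the unit count alone, is the point: the context travels with the number.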

What are the emotion-to-behaviour correlations? Scan the mood scores logged before the heavier drinking days. Is there a consistent direction? Low mood predicting higher consumption later that day? Anxiety scores correlating with evening drink counts? The correlation between emotional state and subsequent behaviour is one of the most consistently useful things the data shows, and it's almost never visible without the data, because the gap between mood earlier in the day and drinking in the evening is enough to obscure the connection in memory.

What does the "after" picture look like? Review the sleep quality scores and next-day moods for the nights after heavier drinking. Is there a consistent cost? The contrast between the immediate consequence (the relief, the relaxation) and the following-day consequence (the sleep quality, the energy, the mood) is often where the data is most persuasive: not because it tells you what to do, but because it makes the actual cost visible in a way that felt costs don't.

What are the consumption-free days like? If there are days or periods without drinking in the log, what do the sleep scores look like? The mood scores? These provide the counterfactual: the comparison case against which the drinking days are being measured.

Is there a trend over time? Zooming out from individual days: is consumption stable, increasing, or decreasing across the logged period? Is the trend different by week of the month, or by season, or following particular kinds of events? A trend is harder to see from inside than from the data, and it's one of the things that makes longitudinal tracking more useful than any individual data point.
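A least-squares slope over weekly totals is the simplest honest version of this check: one number summarising direction and pace. A sketch, with invented weekly totals standing in for a real log:

```python
# Sketch: is the weekly total trending up or down across the logged period?
# The weekly unit totals below are illustrative assumptions, oldest first.
weekly_totals = [24, 21, 22, 18, 16, 15]

n = len(weekly_totals)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(weekly_totals) / n

# Least-squares slope: the average change in units from one week to the next.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, weekly_totals)) \
        / sum((x - mean_x) ** 2 for x in xs)
print(f"trend: {slope:+.1f} units per week")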

The questions a therapist would ask

In a CBT session, the data review is not passive. The therapist uses what the data shows to ask specific questions that deepen the client's understanding of the pattern. You can ask these of yourself.

"That was your highest week. What was happening?" The week with the most units (not the most dramatic night, but the week) usually has a context. Work pressure, a relationship difficulty, a change in routine, a period of isolation. Identifying that context doesn't explain the drinking away; it locates the mechanism.

"What surprised you about this?" The data almost always shows something the person didn't expect. The total that was higher than estimated. The Monday nights that show up in the data unexpectedly. The correlation between a particular emotional state and a particular behaviour that felt unconnected. The surprise is the place where the data is challenging the mental model, and that challenge is where the clinical work happens.

"What did you make of the sleep scores after drinking nights?" This question asks the person to examine the consequence data against their belief about what the drinking was doing. If the belief was "I drink to help with sleep" and the data shows sleep scores of 4 after drinking nights versus 7 after alcohol-free nights, the data and the belief are in direct conflict. What does the person make of that? The question doesn't require an answer. It requires the examination.
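The comparison behind that question is two averages side by side. A minimal sketch, assuming each night is logged as a unit count plus a next-morning sleep score (the pairs below are invented):

```python
# Sketch: average sleep quality after drinking nights vs alcohol-free nights.
# The (units, next-morning sleep score) pairs are illustrative assumptions.
nights = [(5, 4), (0, 7), (3, 5), (0, 8), (6, 3), (0, 7), (4, 4)]

drinking = [sleep for units, sleep in nights if units > 0]
alcohol_free = [sleep for units, sleep in nights if units == 0]

avg_drinking = sum(drinking) / len(drinking)
avg_free = sum(alcohol_free) / len(alcohol_free)
print(f"after drinking nights:     {avg_drinking:.1f}")
print(f"after alcohol-free nights: {avg_free:.1f}")
```

Two numbers next to each other do the work: if the belief is "drinking helps me sleep" and the drinking-night average sits well below the alcohol-free average, the conflict is right there on the page.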

"What's the pattern you didn't know was there before you looked?" This is the closing question of a useful data review: the pattern now visible in the log that was invisible before the log existed. That is the specific contribution of self-monitoring, the one no amount of reflection or conversation can produce: the pattern you couldn't see from the inside, now legible in the data.

The emotional experience of reviewing

It's worth acknowledging that reviewing accurate data about your own substance use is not always a comfortable experience. The gap between the mental model and the record can be significant. The sleep cost, documented across multiple entries, can be confronting. The weekly total, calculated from the log rather than from memory, sometimes lands heavily.

CBT is not a comfortable process in this respect; it's specifically designed to challenge the distortions and avoidances that maintain problems by keeping them invisible. The discomfort of seeing accurate data is not a problem with the data. It's the data working.

The appropriate response to a confronting data review is not shame and not despair. In CBT terms, it's curiosity. The data is showing you what's happening. That's useful, regardless of what it shows. The therapist's job, and in self-directed monitoring your job, is to approach the data with the stance of an investigator rather than a judge: What does this mean? What does it suggest? What would I want to test next?

Making the review a practice

The clinical recommendation is clear: self-monitoring data should be reviewed regularly, not allowed to accumulate unexamined. In a CBT context this happens every session, weekly or fortnightly. For self-directed monitoring, a weekly review practice is a reasonable approximation.

This doesn't need to be long. Twenty minutes, once a week. Look at the data from the previous seven days. Ask the questions above. Note anything that surprises you or doesn't fit the mental model. Bring the same curiosity to it that a good clinician would bring.

The data accumulates whether you review it or not. The value is realised when you look at it.


ayodee surfaces your patterns automatically: weekly trends, mood correlations, consumption over time. The data is there. The review is the part you do. Anonymous, no account needed.

References

Korotitsch, W.J., & Nelson-Gray, R.O. (1999). An overview of self-monitoring research in assessment and treatment. Psychological Assessment, 11(4), 415.

Cohen, J.S. et al. (2013). Using self-monitoring: implementation of collaborative empiricism in cognitive-behavioral therapy. Cognitive and Behavioral Practice, 20(4), 419–428.

Persons, J.B. (2008). The Case Formulation Approach to Cognitive-Behavior Therapy. Guilford Press.
