As a graduate course instructor, I always enjoy getting emails from former students asking for assistance with a particular client case. It’s rewarding to work with new professionals as they recall what they learned in the classroom and ask analytical questions about how to use that knowledge to help a real human.

Every now and then, I get a question like this (borrowed from a recent r/slp reddit post):

So when the student does a phrase repetition ( “Then they- Th-then they went to see them” ) I know I’m only supposed to count the non-stuttered syllables, so it would be 6 syllables for that sentence. But I’m confusing myself on the disfluency count…since they repeated 2 words would the disfluency count for this sentence be 2/6 or just 1/6 since it’s just one stuttering occurrence? And does it matter at all that on one of the words they did an initial sound rep?

An SLP seeking help

To be clear, I do not fault former students for asking. Many job settings require* some sort of fluency measurement tool, such as the SSI (Stuttering Severity Instrument, referenced here). 

Confession: I have never used the SSI. Well, maybe I did once or twice when I was in graduate school, but I cannot recall. I don’t actually know what the rules are for calculating disfluencies using that tool. Also, I haven’t even done a % syllables stuttered (%SS) count since…a lot of years. 

Why? Because when it comes to helping a person who stutters (PWS) with their communication, counting their disfluencies is pretty clinically irrelevant, and can easily set you down the path of “bad therapy”. Saying a person is 24% disfluent is about equivalent to saying a person is 24% autistic. It is both absurd and offensive. Just this month, the Journal of Speech, Language, and Hearing Research published an article by Tichenor, Constantino and Yaruss stating:

The term fluency, as it is typically used, is not inclusive of all people who stutter or fully representative of the stuttering experience, encourages the use of misleading measurement, constrains the subjective experience of stuttering within a false binary categorization, and perpetuates a cycle of stigma that is detrimental to many people who stutter.

Tichenor SE, Constantino C, Yaruss JS. A Point of View About Fluency. J Speech Lang Hear Res. 2022 Jan 4:1-8. doi: 10.1044/2021_JSLHR-21-00342. Epub ahead of print. PMID: 34982943.

So. You have a stuttering evaluation on Monday. What do you do about this?

In this post, I will present a brief introduction to:

  1. Why disfluency counts are clinically irrelevant at best, and harmful at worst
  2. How to record physical components of stuttering in clinically relevant and helpful ways
  3. How to stop using outdated and harmful disfluency count tools at your place of work

Why are disfluency counts irrelevant?

Stuttering is highly variable

“The only consistent thing about stuttering is the fact that it is inconsistent.” The phenomenon of stuttering is as diverse as the people who stutter themselves, but this statement is just about the only one that seems to hold true for everyone. A person can have no stuttering on Monday, and wake up Tuesday barely able to get a word out. Why? Because they stutter, and that’s how stuttering works.

This presents a problem for any tool designed to measure speech fluency/disfluency, particularly if the intent is to correlate that severity rating with a diagnostic statement. “Ali demonstrated 14% disfluencies, earning them a moderate severity rating.” (Me: is 14% moderate? IDK. Anyways, moving on.)

That statement and measurement are true and accurate for the 2-5 minutes of speech that made up the sample. But what about the 2-5 minutes just before that? And what about the 2-5 minutes after that? You have diagnosed severity for a moment in time that no longer exists, unless you have the powers of Dr. Strange. 
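For readers unfamiliar with the arithmetic being critiqued here, a %SS figure is simply stuttered syllables divided by total syllables spoken, times 100. Below is a minimal sketch of that calculation with entirely hypothetical counts, showing how two adjacent short samples from the same speaker can yield very different "severities":

```python
# Minimal sketch of the %SS arithmetic (hypothetical counts, not a recommended workflow)
def percent_syllables_stuttered(stuttered: int, total: int) -> float:
    """%SS = stuttered syllables / total syllables spoken, expressed as a percentage."""
    return 100 * stuttered / total

# The same hypothetical speaker, two adjacent 3-minute samples:
clinic_sample = percent_syllables_stuttered(42, 300)  # 14.0 -> "moderate"?
hallway_chat = percent_syllables_stuttered(6, 310)    # ~1.9 -> "mild"?

print(f"{clinic_sample:.1f}% vs {hallway_chat:.1f}%")  # same person, same afternoon
```

Neither number tells you which moments the speaker actually finds distressing, which is the point of the sections that follow.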

In situations where I’ve been compelled to write a generalized diagnostic statement specifying severity, in reference to a speech sample, I write it as follows: 

Ali demonstrated 14% disfluencies, earning them a moderate severity rating at the time of the evaluation. Based on additional evaluation information, actual severity ranges from mild to profound, dependent on contextual communication factors.

If you are a well-trained clinician, you might be thinking to yourself that a severity rating ranging from “mild to profound” is so broad that it becomes absurd and useless. You would be correct. However, the above verbiage is a more accurate clinical description of the person’s experience of stuttering than the 3-minute speech sample on which my analysis was based. 

This brings us to point number two about the clinical irrelevancy of disfluency counts, which is…

Stuttering is defined by the speaker’s experience, not the listener’s

This principle has been advocated by the stuttering community for decades, and reiterated in recent research. Tichenor and Yaruss’ (2019) study investigating “Stuttering as Defined by Adults Who Stutter” firmly establishes this fact, as well as the harmful clinical implications of defining stuttering based on a listener’s experience:

Each person who stutters exhibits a unique and individualized constellation of behaviors and reactions as compared to someone else who stutters. These behaviors and reactions develop based on each person’s individual experiences and tendencies. All of these individuals are stuttering, even though there may be seemingly great differences in the presentations of stuttering. For example, many people who stutter exhibit so-called stuttering or stutter-like behaviors that can easily be observed by listeners. Importantly, however, other people may engage in behaviors to hide stuttering, such as avoiding sounds or words, switching words, or choosing not to talk as a response to the underlying sensation of being stuck (Tichenor & Yaruss, 2019). According to this framework, such individuals would also be considered to be stuttering, even though they do not demonstrate overt stuttering behaviors that listeners might perceive. Thus, just as there are many aspects of this stuttering constellation, there are many phenotypes of stuttering, as past models have theorized (see Yaruss & Quesal, 2004).

[...] the diagnosis of stuttering is frequently made by clinical observation of stuttering behaviors. In fact, the Stuttering Severity Instrument—a commonly used evaluation protocol—assesses stuttering based solely on how often behaviors happen as observed by the listener, how long in duration those observable behaviors are, how distracting these behaviors are to the listener, and how natural a person’s speech sounds are to the listener (Stuttering Severity Instrument–Fourth Edition; Riley, 2009). Data from this study show that a person may experience stuttering and self-report to be a person who stutters (even severely), regardless of whether or not they exhibit such behaviors or whether a listener can perceive them. Other research evidence suggests that covert stuttering behaviors may be relatively common across the population of people who stutter (Constantino et al., 2017; Douglass et al., 2018; Tichenor & Yaruss, 2019). Thus, in order to accurately diagnose a person who stutters or to appropriately include a person in a research sample of people who stutter, clinicians and researchers must account for the many and varied ways that the stuttering phenotype can be expressed. To continue to assess stuttering by virtue of how often certain behaviors happen likely underestimates the prevalence of stuttering and may lead to a higher likelihood of rejecting someone from services when they actually need them, discharging someone from therapy when they should not be, or considering someone recovered when they are actually still experiencing stuttering (Franken et al., 2018).

Tichenor SE, Yaruss JS. Stuttering as Defined by Adults Who Stutter. J Speech Lang Hear Res. 2019 Dec 12;62(12):4356-4369. doi: 10.1044/2019_JSLHR-19-00137. PMID: 31830837.

TL;DR: If you are only counting the parts of stuttering that a listener (including a speech-language pathologist, parent, teacher, or spouse) can hear, see, and observe, you are counting but a fraction of their full stuttering experience.

How should we record disfluencies?

While I am a firm believer that simply counting disfluencies and %SS is a ridiculous exercise, it is important to observe, inventory, and analyze how a person stutters. For most PWS, stuttering has some degree of physical manifestation, whether that is an internal sensation of being stuck, outwardly observable disfluencies, or a mix of both. Addressing the distressing physical components of the stuttering experience is a vital aspect of good stuttering therapy. And as all clinicians know, you can’t treat something if you haven’t properly evaluated it first.

So, we need to make sure that we are evaluating the physical stuttering accurately, and designing physical treatment approaches that directly address any distressing elements.

There are three components I evaluate when trying to understand a person’s behavioral experience of stuttering. In order of importance, these are:

  1. Internal experience
  2. Intensity
  3. Frequency

Internal Experience

I start with the internal experience of stuttering because, well, that’s what the evidence says is most important. The Overall Assessment of the Speaker’s Experience of Stuttering (OASES) is the gold standard for getting this information, and my go-to (along with the interview). Starting with the OASES can provide clues as to how much of the person’s experience is internally vs. externally observable. More importantly, it tells me which parts of the stuttering experience are most distressing to the speaker. For some people, their physical stuttering is indeed the most disabling aspect of their experience. For others, though, the physical speech pattern is a footnote in a much larger psychosocial phenomenon.

Intensity

I do record speech samples during an evaluation. Though it is only a moment in time, that moment in time can indicate patterns of stuttering. That pattern may intensify or abate, depending on the speaking situation. 

Many PWS have some internal threshold of what makes a stutter “tolerable” (one that isn’t too bothersome) vs. “intolerable” (accompanied by physical and/or mental distress). In speech therapy, we often refer to these as “speed bumps” vs. “roadblocks”. 

Speaking very generally, speed bumps tend to be less intense: they are shorter in duration, have minimal or workable amounts of physical tension, and - most importantly - they have a minimal impact on the overall communication.

Roadblocks, by contrast, tend to be longer in duration and involve significant physical tension, struggle, and/or secondary behaviors. Roadblocks have a very real, usually negative, functional impact on communication. This impact could be internal (the speaker feels embarrassed, ashamed, etc.), external (it creates an awkward pause or may even reduce intelligibility), or a combination of both.

How a person stutters in their most intense moments indicates where to begin with treatment. If there is immense physical struggle, tension, or significant secondary behaviors, lightening those will reduce the intensity. If the moment is very long, finding ways to shorten it or introduce supportive strategies to move through it will soften the difficulty. 

Another way to think about intensity is quality. In stuttering, quality usually makes a bigger difference than quantity.

This brings us to…

Frequency

Last, and generally the least important component, is frequency. How often does the person stutter? This is what %SS and tools like the SSI try to capture.

My question is: why does it matter how often they stutter?

If a person stutters very frequently (a high %SS), but it consists almost entirely of speed bumps and they report minimal internal distress (as measured by the OASES) - why does it make sense to introduce speech strategies and artificial communication exercises to change this? That only increases effort, and therefore decreases quality of communication. If the stuttering moments are non-intense and non-impactful, there is no functional reason to change the way they speak.

Conversely, if a person stutters very infrequently, but the moment is very intense when it occurs, they may demonstrate far more functional communication disability than someone with frequent, easy disfluencies. This is, in fact, a common clinical profile among PWS, and the one referenced by Tichenor & Yaruss as at risk for early, harmful discharge from therapy. 

In some cases, frequency can be a primary driver of severity. Small stutters that occur on every. single. word. can have a profound cumulative effect, in exactly the same way that twenty-five speed bumps in a one-block stretch would do a number on your car. This is why it is important to acknowledge frequency, and pay attention to how frequency relates to intensity and the speaker’s experience. 

Frequency as an outcome of therapy can be a proxy tool for progress - but sometimes, the opposite is true. Very often, long-term success in speech therapy is evidenced by a reduction in stuttering intensity and frequency. However, part of the journey may require an increase in frequency, particularly if the person was avoiding communication previously. If they start to talk more (progress!), that might come with more stuttering. In this case, an increase in frequency is a positive sign of growth and something to be celebrated!

A frequency measure is valid ONLY IF you are interpreting it through the lens of the speaker’s experience and how stuttering impacts their communication.

*But I have to give a severity rating with a quantitative speech analysis at my work! What should I do?

I have an outline below, but my first question is: have you tried not doing a quantitative speech analysis?

I will do my best to refrain from going on a long tangent about “unnecessary things taught in SLP graduate school”, and ask a question instead.

If you just didn’t do a fluency count, or %SS, or SSI, and left that out of the report, while still accounting for the speaker’s experience and pattern of stuttering: what would happen?

Maybe, depending on your work setting, you are required to give some “hard numbers” in your report. Here are some numbers you can give without resorting to fluency counts (a brief sketch of how to summarize them follows the list):

  • OASES results (gold standard)
  • Duration of stuttering moments (length of time)
  • Approximate number of stuttering moments in a given time frame
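To make the last two bullets concrete, here is a minimal sketch, using made-up observations, of how stuttering moments from a recorded sample might be summarized as a count and a duration range. The variable names and numbers are hypothetical:

```python
# Hypothetical observations from one 5-minute recorded sample:
# each value is the duration (in seconds) of an observed stuttering moment.
moment_durations_sec = [0.4, 0.6, 3.2, 0.5, 7.8]
sample_length_min = 5

count = len(moment_durations_sec)
shortest = min(moment_durations_sec)
longest = max(moment_durations_sec)

print(f"Approximately {count} stuttering moments over {sample_length_min} minutes")
print(f"Moment durations ranged from {shortest} to {longest} seconds")
print(f"The longest moments ({longest} s here) are the ones most likely to act as roadblocks")
```

Numbers like these describe what happened without pretending to be a severity verdict, and they pair naturally with the OASES results above.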

For an example of how to report qualitative and quantitative components of stuttering, here is a downloadable PDF with a narrative template.

Guess what: I have spent years submitting evaluation reports to insurance companies for clinical review, using only the above numbers. Do you know how many have been rejected for insufficient quantitative data? Zero. (They get rejected for other reasons, but that’s a whole other matter.) Why? Because there IS quantitative data! Thank you, OASES.

And do you know how many times the fluency count police have come after me? Zero. Why? Because there is no such thing as the fluency count police, despite the fact that most SLPs carry a permanently instilled fear of graduate school supervisors lurking under our beds, waiting to read our evaluation reports and report us to the ASHA ethics board for not doing what they taught us.

Finally, in the rare instances that I have had to explain to someone why I don’t do fluency counts in my report, I simply explain the rationale, citing evidence-based practice. I give the above information to establish why I don’t use fluency counts or the SSI as a diagnostic tool, and then provide my rationale for the alternative methods I use instead. The typical response? “Oh, that makes a lot more sense.”

So, how do you stop doing disfluency counts in your evaluations? Easy. Just…don’t do them. If you must give a frequency estimate, it is perfectly fine to eyeball it or do a rough approximation that may be +/- 10-20% “accurate” (whatever “accurate” means). Of the things that matter when it comes to understanding stuttering, deciding whether or not “we-we-we-we had a good time” is a word or syllable repetition is underneath the bottom of the barrel.

In Summary

Disfluency counts are not an accurate or evidence-based method for diagnosing stuttering and determining severity. Traditional tools like the SSI or %SS calculations may lead to misdiagnosis and/or interventions that do more harm than good. 

Instead, use the OASES paired with a “quality over quantity” approach to analyze behavioral components of stuttering. Center the speaker’s experience of their stuttering, using listener observations only as supplemental data.

You can stop including these numbers in your reports. Odds are, nobody is forcing you to use these outdated clinical tools–so give yourself permission to stop using them. By doing so, you will be advocating for a better understanding of what stuttering really is.

For more holistic, evidence-based resources for stuttering therapy, check out our 3Es resource page: speechIRL.com/3es