by Matt Fenwick, UX Mastery
September 12, 2013
Readability tests promise so much. Just take a sample of your text, go to a free online tool, paste the text in, and out comes a number showing how easy your text is to understand.
If only it were that simple.
Readability tests have copped a lot of flak. Critics say they're far too simplistic to accurately predict how an actual reader will respond to your text.
I completely agree. And I use readability tests all the time.
I’m going to run through the limitations of these tests. Then I’ll show how, if you keep these limitations in mind, the tests can be immensely useful.
Readability tests all share the same basic approach. They count the number of syllables, words or sentences in your text, then find the ratio between these elements. This formula generates a number, which you compare against a standard to determine how readable your text is.
I’ll go through one test to show you how they work.
The Gunning Fog Index aims to show how many years of formal education a reader would need to understand a text. The Index takes the average number of words per sentence, adds the percentage of complex words (words with three or more syllables), then multiplies the result by 0.4.
If the number is 12, that means someone would need 12 years of formal education to understand the text: they have finished high school.
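The formula above is simple enough to sketch in a few lines of code. This is a minimal illustration, not the implementation any particular tool uses: the syllable counter here is a rough vowel-group heuristic, and real tools handle edge cases (silent "e", hyphenation, proper nouns) more carefully.

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count groups of consecutive vowels.
    A heuristic, not a dictionary lookup."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    """Gunning Fog Index: 0.4 * (average words per sentence
    + percentage of complex words, i.e. words of 3+ syllables)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    avg_sentence_length = len(words) / len(sentences)
    pct_complex = 100 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_length + pct_complex)
```

Run it over a representative sample of a page and you get a single grade-level number you can compare against your audience's reading level.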
There are many other tests out there. The Flesch-Kincaid test is also popular, partly because it's built into Microsoft Word.
As the old saying goes, if something seems too good to be true, then it probably is. Critics say that you can't use maths to predict how easy a text is to understand. A common criticism is that the number of syllables doesn't always predict readability.
Here are two examples: "interesting" has four syllables but is familiar to almost every reader, while "plinth" has just one syllable and will stump plenty of people.
A further criticism is that the readability tests don’t tell you if a sentence makes sense. Take this example:
“This tree is jam.”
The words are simple, so the sentence would score well on a readability test. But it makes no sense whatsoever.
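You can check the arithmetic by hand. Plugging "This tree is jam." into the Gunning Fog formula (four one-syllable words, one sentence, no complex words) gives:

```python
# "This tree is jam.": 4 words, 1 sentence, 0 complex (3+ syllable) words
words_per_sentence = 4 / 1
pct_complex = 100 * 0 / 4
fog = 0.4 * (words_per_sentence + pct_complex)
print(fog)  # 1.6 -- "readable" by a first-grader, despite being nonsense
```

A score of 1.6 says a six-year-old could understand it, which neatly demonstrates the limitation: the formula measures surface features, not meaning.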
I use readability tests when I want a quick indication of how readable a chunk of text is—or when a numerical measure will appeal to stakeholders. Even taking the criticisms above on board, we can say that readability tests have some predictive value: if a readability test shows that content has problems, this will often be true.
There's a reason why convoluted writing is so ingrained: professionals are used to writing this way. A writer's opinion alone is often not enough to convince them to alter the habits of years—decades even. And it's hard to have strategic conversations with senior executives about the state of communication in their organisation by combing through each sentence in a document.
That's where readability tests come in. Because they generate numbers, you can aggregate data. For example, when I tested a 2,000-page website recently, I could tell the client: "Your target audience will find it difficult to understand 80% of your pages." This then feeds into decisions about how much work is needed to bring the content up to scratch.
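Aggregation of this kind is straightforward once each page has a score. The sketch below uses made-up page scores and an assumed grade-12 audience threshold, just to show the shape of the analysis:

```python
# Hypothetical per-page Gunning Fog scores from a site crawl
page_scores = {"/home": 9.8, "/about": 14.2, "/services": 16.1,
               "/faq": 11.0, "/legal": 19.5}

# Flag pages above the audience's assumed reading level (grade 12)
threshold = 12
hard_pages = [page for page, score in page_scores.items() if score > threshold]
share = 100 * len(hard_pages) / len(page_scores)
print(f"{share:.0f}% of pages exceed grade {threshold}")
```

A single percentage like this is far easier to put in front of an executive than a page-by-page markup of sentences.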
I've also used these tests with small-business owners as a starting point for conversations about how they can make their web content clearer and more concrete.
I would only ever use readability tests as a diagnostic for content as a whole. The tools aren't sophisticated enough for sentence-by-sentence analysis, or to act as a checkpoint when clearing a document.
The gold standard will always be testing the content with actual users. But if you need a rough picture, then readability tests are a useful tool for your kit.