I’ve received several emails in response to Tuesday’s post about other ways to measure email engagement, all asking the same question: how accurate are open rates?

The main concern is that if the open rate data is bad, any analysis built on top of it, like the one I suggested, is also bad.

For example, you might miscategorize someone, most likely as NOT engaged when they actually are. That’s because they may be opening your email, but if they have image blocking turned on, those opens are not counted.

The opposite is also potentially true: Someone who uses Outlook could be seeing your email multiple times in their preview pane with images turned on as they scroll through their inbox. They would be counted as opening that email, even if they never actually stopped to read it.

This is because opens are typically counted in two ways:

(1) when a small, invisible image inserted by your email service provider is downloaded from their server. This happens automatically and doesn’t affect the way your email looks. If images are blocked or turned off, that image is not downloaded and therefore the email service provider can’t tell if the email has been opened.

OR

(2) when any link in the email is clicked. Regardless of whether images are on or off, if any link is clicked, it’s typically counted as an open.
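Those two rules can be sketched in a few lines of code. This is a hypothetical illustration of the counting logic, not any particular email service provider’s implementation; the function names are invented:

```python
# Hypothetical sketch of how an ESP might count opens (invented names,
# not any real vendor's code).
opens = set()   # subscriber IDs counted as having opened
clicks = set()  # subscriber IDs counted as having clicked

def record_pixel_download(subscriber_id):
    """Rule 1: the small invisible tracking image was downloaded from the
    ESP's server, so count an open. If the subscriber blocks images,
    this never fires -- which is exactly the blind spot described above."""
    opens.add(subscriber_id)

def record_click(subscriber_id):
    """Rule 2: any link click counts as a click AND an open,
    even when images are blocked."""
    clicks.add(subscriber_id)
    opens.add(subscriber_id)

# A subscriber with images blocked who clicks a link is still
# counted as an opener.
record_click("laura")
print("laura" in opens)  # True: the click implies an open
```

Note that Rule 2 is what rescues image-blocking readers: as long as they click, they show up as openers.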

So any additional analysis based on open rates has this flaw built into the data.

It’s just the way it is, but frankly, I don’t worry about it, because . . .

(1) It’s the best thing we have at the moment, and it’s a consistent issue across all companies and sectors. It’s flawed in the same way for everyone.

(2) It means that your numbers are likely better than you think. Personally, I’d rather see under-reported numbers than over-inflated ones.

(3) Remember, any click counts as an open too. If you are giving people plenty of opportunities to click (and you should be), then image blocking is really a moot issue.

(4) Odds are that you are conferring benefits on people who are on the “openers” lists, rather than punishing people who are not. It’s not like you would ever email someone who doesn’t appear to be opening and say, “You really suck. You must hate us, and therefore we hate you. Bye Felicia!” Instead, you are likely to send a more neutral message asking if they want to stay on your list, and asking them to click a link to say Yes.

Here’s a case in point. Laura, one of our regular newsletter readers, has image blocking on. She was concerned that I would consider her unengaged, when in fact, she is a regular reader.

So, I looked up her record in our CRM.

I have Laura down as opening 2 of the 4 newsletters in the experiment, but clicking on just 1 of the 4. I’m not entirely sure how that works, as it seems like she should be tagged as clicking twice. I suspect that certain clicks (e.g. on admin links at the top and bottom of the email) may not be counted as clicks, but are considered evidence of an open. In any case, she’s not categorized as unengaged just because she has image blocking turned on.
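One plausible way that 2-opens-but-1-click record could arise, sketched hypothetically (I don’t actually know the CRM’s rules, and the link names here are invented): clicks on administrative links register an open but are excluded from the click tally.

```python
# Hypothetical sketch: some systems may treat clicks on admin links
# (unsubscribe, preferences, view-in-browser) as evidence of an open
# without counting them as engagement clicks. Invented for illustration.
ADMIN_LINKS = {"unsubscribe", "preferences", "view-in-browser"}

def tally(events):
    """events: list of (newsletter_id, link_clicked) pairs for one subscriber.
    Returns (number of newsletters opened, number of newsletters clicked)."""
    opened, clicked = set(), set()
    for newsletter_id, link in events:
        opened.add(newsletter_id)       # any click is evidence of an open
        if link not in ADMIN_LINKS:
            clicked.add(newsletter_id)  # only content clicks count as clicks
    return len(opened), len(clicked)

# A Laura-style record: clicks in two newsletters, one of them
# only on an admin link.
opens_n, clicks_n = tally([(1, "article"), (2, "view-in-browser")])
print(opens_n, clicks_n)  # 2 1
```

Under this assumption, two clicked newsletters can yield 2 opens but only 1 click, which would match the record above.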

The bottom line: It’s true that open rates are an imperfect metric. But refusing to use the data to do the analysis I did to separate our best followers from the rest of the list is silly, I think. Don’t let the perfect be the enemy of the good on this one. Include lots of opportunities to click, and your data will be much better.