Perhaps some Canadian and Toronto-based readers of this blog attended, as I did, this morning’s breakfast event put on by CISION called “Proving PR’s Value Through Measurement.”
Kudos to CISION for taking something like this on. If the number of attendees was their measure of success, then they clearly met their objective. It was one of the better-attended measurement events in many moons. Free admission and free breakfast aside, it’s a topic the practitioner set always wants to hear about.
With the exception of the line in the invite that said “understand the value and attributes of various measures such as advertising equivalency” (yikes), something I took issue with last week, it appeared promising. Promising not so much because of the wording of the invite as because a heavy hitter from the organization formerly known as Delahaye was on the presenting hot seat.
Frankly, I’d have expected more. While there were, in fairness, some notable nuggets, there are a few things that struck me as problematic. In no particular order…
First, they called it “proving PR’s value through measurement.” I didn’t see proof, and they’re not measuring PR; they’re measuring media coverage. There’s so much more to communications measurement than media coverage. I’m confident Delahaye knows that; it’s just curious that it wasn’t acknowledged. That the presentation threw terms like net “effect” and “impact” at us worries me. We’re talking about measuring the quantity and quality of coverage, in a reasonably sophisticated way (more sophisticated than most are used to seeing, anyway), but not in a new way. Full stop. Using terms like effect and impact in this context is misleading because we’re really talking about the POTENTIAL to affect and POTENTIAL impact. Eyeballs don’t equal impact. It would sure make my work life easier if they did. To measure actual impact, you’d need (to take one among many examples) to push the research one step further into polling and link the polling data to the media content analysis. What’s odd is that I know Delahaye does this kind of work, along with other fancy methods like market mix modelling, and they are often, rightly, held in very high regard for the work they do. So why didn’t we see some of that? This was hardly putting their best foot forward.
Second, one of the problems with the CISION version of a net effect score (though I appreciate what they are trying to do, and it’s at least commendable that they reduce the number of impressions based on the quality of the article) is that it assumes that everyone who receives a copy of a publication, or everyone who had an opportunity to see the article, DID see it. Obvious flaw there. It doesn’t account, for example, for the different consumption patterns of heavy vs. light readers.
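To make the mechanics of that critique concrete, here is a minimal sketch of what a quality-weighted impressions score looks like in general. The weights, quality categories, and article numbers are all invented for illustration; this is not CISION’s actual formula. The point stands either way: even after discounting for quality, the figure still counts every opportunity-to-see as an actual view.

```python
# Hypothetical sketch of a quality-weighted impressions ("net effect"-style)
# score. Weights and article data are made up for illustration only.

# Each article: raw impressions (circulation / opportunity to see) plus a
# quality rating assigned by a human coder.
articles = [
    {"impressions": 500_000, "quality": "positive"},
    {"impressions": 200_000, "quality": "neutral"},
    {"impressions": 300_000, "quality": "negative"},
]

# Hypothetical discount factors: lower-quality coverage counts for less.
QUALITY_WEIGHTS = {"positive": 1.0, "neutral": 0.5, "negative": 0.1}

def net_effect_score(items):
    """Sum impressions, discounted by the quality of each article."""
    return sum(a["impressions"] * QUALITY_WEIGHTS[a["quality"]] for a in items)

raw_total = sum(a["impressions"] for a in articles)
weighted_total = net_effect_score(articles)
print(raw_total, weighted_total)  # 1,000,000 raw vs. 630,000.0 weighted
```

Note what the discounting does not fix: both totals still assume every one of those impressions was an actual reader, which is exactly the flaw described above.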
Third, their position on advertising equivalency was a source of debate. I understand the point that it can be useful in some contexts if we use it not in absolute terms ($x) but in relative terms (it went up 10% month over month). But I thought it could have been more clearly articulated; it’s a difficult concept to get across and a tough sell when you have an audience that, rightly or wrongly, has grown accustomed to seeing it as a no-no. A friend and colleague of mine likens the use of AVEs to smoking: we all know we’re not supposed to do it, but many of us do.
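The relative-terms argument above boils down to simple arithmetic: report the trend, not the dollar figure. A quick sketch, with made-up AVE numbers:

```python
# Illustrative only: using AVE as a relative trend indicator rather than
# an absolute dollar value, per the argument above. Figures are invented.

def pct_change(previous, current):
    """Month-over-month percentage change."""
    return (current - previous) / previous * 100

ave_last_month = 120_000.0  # hypothetical AVE dollars
ave_this_month = 132_000.0

# Report the direction and magnitude of the change, not the dollar amount.
print(f"{pct_change(ave_last_month, ave_this_month):+.1f}%")  # +10.0%
```

The dollar figures never need to appear in the report; only the percentage movement does, which sidesteps the absolute-value objection.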
Fourth, there’s this idea that if a message is present, it must be positive. Not so. It’s rare, but a message could very well be dragged into an article and countered by an industry analyst or a critic. We saw this all the time at Cormex Research. It’s an important variable to track, and it’s not accounted for in the CISION methodology.
So, again, kudos to CISION for putting this on. You can’t please everyone. It’s important that the industry keep hearing about and talking about measurement if we are to continue to sharpen our collective measurement pencils. But I worry sometimes about events that claim to present best practices and don’t fully deliver.